1 The unified skew-normal distribution

1.1 Early development, applications and some open problems

In recent years, there has been a vigorous impulse in the development of flexible parametric families of distributions. This activity is especially lively and stimulating in the multivariate setting, in correspondence with the ever-increasing availability and treatment of multivariate data in applied work.

An active direction of research within this process is represented by a family of continuous distributions which originated as a generalization of the multivariate skew-normal (SN) distribution, itself a generalization of the classical normal distribution; for a review of the SN distribution and its ramifications, see Azzalini and Capitanio (2014). The generalization we are concerned with originated from multiple independent sources, with some differences in the technical development, but with a common underlying structure, as explained later. Specifically, González-Farías et al. (2004a) and González-Farías et al. (2004b) have developed the ‘closed skew-normal distribution’. Motivated by Bayesian inference considerations, Liseo and Loperfido (2003) have presented the ‘hierarchical skew-normal’. Another related construction is the ‘fundamental skew-normal’ proposed by Arellano-Valle and Genton (2005), who also consider a second version of the closed skew-normal distribution.

The interconnections among these apparently separate formulations have been examined by Arellano-Valle and Azzalini (2006), showing their essential equivalence, as well as the presence of overparameterizations in some cases. To this end, they introduced a unifying version which embraces the above-recalled proposals, removing at the same time the existing overparameterizations. This version was hence denoted the ‘unified skew-normal (SUN) distribution’. Its main formal properties will be summarized in the next subsection. In essence, the constructive mechanism starts from a \((d+m)\)-dimensional normal distribution, where m of the components play the role of hidden variables which modify the remaining d components non-linearly, through a conditioning event on the hidden components. The construction leads to a d-dimensional non-normal distribution, with the regular normal distribution included as a special case. We shall refer to this distribution as a \(\mathrm {SUN}{}_{d,m}\).

The SUN family constitutes a superset of the SN family, more specifically of the so-called ‘extended skew-normal (ESN) family’, to which the SUN family reduces if \(m=1\). Its building mechanism, based on m latent variables, leads to certain properties not shared by the SN and ESN distributions. An important specific fact is closure of the family with respect to convolution; specifically, the sum of two independent SUN variables of type \(\mathrm {SUN}{}_{d,m_1}\) and \(\mathrm {SUN}{}_{d,m_2}\) is of type \(\mathrm {SUN}{}_{d,m_1+m_2}\). This property has proved convenient in a number of operational formulations which employ the SUN distribution as their core stochastic component.

The closed skew-normal and the SUN distributions have been employed in a wide range of applied domains, and their relevance appears to be growing. The following is a non-exhaustive list of methodologies and applied domains where these distributions have been used: stochastic frontier analysis in the context of productivity analysis, considered by Domínguez-Molina et al. (2007), Colombi (2013), Colombi et al. (2014), Kumbhakar and Lai (2016); various models for the analysis of spatial data have been introduced by Allard and Naveau (2007), Hosseini et al. (2011), Karimi and Mohammadzadeh (2012), Rimstad and Omre (2014), among others; analysis of longitudinal data for the distribution of random effects in the work of Ghalani and Zadkarami (2019), and again Colombi (2013); combination of phase II and III clinical trials, by Azzalini and Bacchieri (2010); seismic inversion methodology for geological problems, by Karimi et al. (2010) and Rezaie et al. (2014); extended formulations of the Kalman filter by Kim et al. (2014) and Rezaie and Eidsvik (2016); application to small area estimation by Diallo and Rao (2018). In the context of binary data, Durante (2019) has shown that, under Gaussian priors for the probit coefficients, the posterior distribution is exactly of unified skew-normal type; this formulation lends itself to interesting developments, such as those of Fasano et al. (2019) and Fasano and Durante (2020).

While the SUN distribution is mathematically quite tractable and enjoys a number of appealing formal properties, it is inevitably more complex than its progenitor, the skew-normal distribution. Consequently, there are several aspects which are still unexplored, or only partly explored; this situation concerns even some rather basic properties. A case in point is the computation of the moments and associated quantities, of which little is known at present, as we shall discuss in more detail later on. Therefore, given the above extensive (even if non-exhaustive) list of applied problems where the SUN distribution provides theoretical support, advancements in the knowledge of the formal properties of the SUN distribution can immediately be beneficial in the applied domain, although this side is not directly tackled here.

More specifically, the main target of the present contribution is represented by a set of results on the moments and derived quantities, considered in Sect. 2. Additional properties are examined in Sect. 3, namely the study of the log-concavity of the density, the conditional distribution of a SUN variable when some of its components belong to a given interval, and a possible alternative form of parameterization.

1.2 Main properties of the SUN family

We summarize the main facts about the SUN family; this term is used to embrace also the closed skew-normal and other essentially equivalent classes, provided a suitable parameterization is adopted. The notation here is the one of Sect. 7.1.2 of Azzalini and Capitanio (2014), which is largely the same as that of Arellano-Valle and Azzalini (2006), with minor variations.

For positive integers d and m, consider the \((d+m)\)-dimensional normal random variable

$$\begin{aligned} \begin{pmatrix} X_0\\ X_1\end{pmatrix} \sim \mathrm{N}_{d+m}\left( 0, \Omega^* \right), \qquad \Omega^* = \begin{pmatrix} \bar{\Omega} & \Delta \\ \Delta^{\top} & \Gamma \end{pmatrix}, \end{aligned}$$
(1)

where \(\Omega ^*\) is a full-rank correlation matrix. Define Z to be a d-dimensional random variable with the same distribution as \((X_0|X_1+\tau >0)\), where \(\tau =(\tau _1,\ldots ,\tau _m)^{\top }\) and the notation \(X_1+\tau >0\) means that the inequality must hold component-wise for each of the m components. Next, introduce the transformed variable \(Y=\xi +\omega \,Z\), where \(\xi =(\xi _1,\ldots ,\xi _d)^{\top }\) and \(\omega \) is a \(d\times d\) diagonal matrix with positive diagonal elements \(\omega _1,\ldots , \omega _d\), and denote \(\Omega =\omega {{\bar{\Omega }}}\omega \). It can be shown that the density of Y at \(x\in {\mathbb {R}}^d\) is

$$\begin{aligned} f_Y(x) = \varphi_d(x-\xi; \Omega)\, \frac{\Phi_m\left\{ \tau + \Delta^{\top}\bar{\Omega}^{-1}\omega^{-1}(x-\xi);\; \Gamma - \Delta^{\top}\bar{\Omega}^{-1}\Delta \right\}}{\Phi_m(\tau; \Gamma)} \end{aligned}$$
(2)

where \(\varphi _h(u;\Sigma )\) and \(\Phi _h(u;\Sigma )\) denote the \(\mathrm {N}{}_h(0,\Sigma )\) density function and distribution function at \(u\in {\mathbb {R}}^h\), respectively, for any symmetric \((h \times h)\) positive-definite matrix \(\Sigma \). In this case, we shall write \(Y \sim \mathrm {SUN}{}_{d,m}(\xi , \Omega , \Delta , \tau , \Gamma )\).
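
To make the above formula concrete, here is a minimal numerical sketch of density (2), assuming the Python packages numpy and scipy are available; the function name `dsun` and its argument layout are our own choices, not part of any established library.

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

def dsun(x, xi, Omega, Delta, tau, Gamma):
    """Density (2) of Y ~ SUN_{d,m}(xi, Omega, Delta, tau, Gamma) at the point x."""
    m = len(tau)
    omega = np.sqrt(np.diag(Omega))                  # scale factors omega_1, ..., omega_d
    Obar = Omega / np.outer(omega, omega)            # correlation matrix associated with Omega
    b = np.linalg.solve(Obar, (x - xi) / omega)      # Obar^{-1} omega^{-1} (x - xi)
    num = mvn.cdf(tau + Delta.T @ b, mean=np.zeros(m),
                  cov=Gamma - Delta.T @ np.linalg.solve(Obar, Delta))
    den = mvn.cdf(tau, mean=np.zeros(m), cov=Gamma)
    return mvn.pdf(x, mean=xi, cov=Omega) * num / den
```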

The SUN family enjoys numerous formal properties. For instance, we have already anticipated in Sect. 1.1 that this family is closed with respect to convolution. Many other interesting facts hold, but it would take too much space to review all such properties here, and we only recall those which are required for the subsequent development; additional information is summarized in Sect. 7.1 of Azzalini and Capitanio (2014). A key fact is the expression of the moment generating function M(t) or, equivalently, of the cumulant generating function of (2), given by Arellano-Valle and Azzalini (2006) as

$$\begin{aligned} K(t) = \log M(t) = \xi^{\top} t + 2^{-1} t^{\top}\Omega t + \log \Phi_m(\tau + \Delta^{\top}\omega t; \Gamma) - \log \Phi_m(\tau; \Gamma), \qquad t \in {\mathbb {R}}^d; \end{aligned}$$
(3)

essentially as in González-Farías et al. (2004a) and González-Farías et al. (2004b), up to a change of parameterization. From this expression, many other results can be derived. One of them is represented by the rule for obtaining the distribution of an affine transformation: if a is a p-vector and A is a full-rank \(d\times p\) matrix, then

$$\begin{aligned} a+ A^{\top }Y \sim \mathrm {SUN}{}_{p,m}(a+A^{\top }\xi , A^{\top }\Omega A, \Delta _A, \tau , \Gamma ) \end{aligned}$$
(4)

where \(\Delta _A= \mathrm {Diag}(A^{\top }\Omega A)^{-1/2} A^{\top }\omega \Delta \), using the notation \(\mathrm {Diag}(M)\) to denote the diagonal matrix formed by the diagonal elements of a square matrix M, as in Mardia et al. (1979, p. 455). Clearly, (4) can be used to compute the distribution of p-dimensional marginals.
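As a small illustration, the rule (4) translates directly into code; the following sketch (with our own function name `sun_affine`, under the same numpy assumptions as above) returns the parameter set of \(a+A^{\top}Y\). Marginals are obtained by taking A to be the relevant selection matrix.

```python
def sun_affine(a, A, xi, Omega, Delta, tau, Gamma):
    """Parameters of a + A^T Y for Y ~ SUN_{d,m}, following rule (4)."""
    omega = np.sqrt(np.diag(Omega))
    Omega_A = A.T @ Omega @ A
    scale = np.sqrt(np.diag(Omega_A))                     # Diag(A^T Omega A)^{1/2}
    Delta_A = (A.T @ (omega[:, None] * Delta)) / scale[:, None]
    return a + A.T @ xi, Omega_A, Delta_A, tau, Gamma
```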

Another result to be used in our development is the expression of the distribution function, which has been given in Lemma 2.2.1 of González-Farías et al. (2004b). Since we adopt the SUN formulation for the reasons discussed by Arellano-Valle and Azzalini (2006), we shall use the equivalent expression, given by Azzalini and Bacchieri (2010),

$$\begin{aligned} F_Y(y)= {\mathbb {P}}_{}\!\left\{ \displaystyle {Y\le y}\right\} =\frac{\Phi _{d+m}({\tilde{z}}; {{\tilde{\Omega }}})}{\Phi _m(\tau ; \Gamma )} \end{aligned}$$
(5)

where

$$\begin{aligned} {\tilde{z}} = \begin{pmatrix} \omega^{-1}(y-\xi) \\ \tau \end{pmatrix}, \qquad {\tilde{\Omega}} = \begin{pmatrix} \bar{\Omega} & -\Delta \\ -\Delta^{\top} & \Gamma \end{pmatrix}. \end{aligned}$$
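
In the same spirit as the earlier density sketch, expression (5) reduces to a single \((d+m)\)-dimensional normal cdf evaluation; a sketch under the previous assumptions:

```python
def psun(y, xi, Omega, Delta, tau, Gamma):
    """F_Y(y) for Y ~ SUN_{d,m}, via expression (5)."""
    m = len(tau)
    omega = np.sqrt(np.diag(Omega))
    Obar = Omega / np.outer(omega, omega)
    z_tilde = np.concatenate([(y - xi) / omega, tau])
    Omega_tilde = np.block([[Obar, -Delta], [-Delta.T, Gamma]])
    return (mvn.cdf(z_tilde, mean=np.zeros(len(z_tilde)), cov=Omega_tilde)
            / mvn.cdf(tau, mean=np.zeros(m), cov=Gamma))
```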

There exist two stochastic representations of the SUN distribution, or equivalently two constructive ways to generate a random variable Y with density (2). The first of these is essentially the above-described process leading from the normal variable \((X_0, X_1)\) in (1) to the variable Y, via the intermediate variable Z. This is denoted the ‘representation by conditioning’, since it operates through the condition \(X_1+\tau >0\).

The other stochastic representation is of convolution type, that is, as the distribution of the sum of two independent random variables. Specifically, from the above-defined quantities, introduce \({{\bar{\Psi }}}_\Delta ={{\bar{\Omega }}}-\Delta \Gamma ^{-1}\Delta ^{\top }\), and the two independent variables \(U_0\sim \mathrm {N}{}_d(0,{{\bar{\Psi }}}_\Delta )\) and \(U_{1,-\tau }\), the latter obtained by the component-wise truncation below \(-\tau \) of a variate \(U_1\sim \mathrm {N}{}_m(0,\Gamma )\). Then, \(Y \sim \mathrm {SUN}{}_{d,m}(\xi , \Omega , \Delta , \tau , \Gamma )\) can be expressed via the so-called additive representation

$$\begin{aligned} Y {\mathop {=}\limits ^{d}}\xi +\omega \,\left( U_0 + \Delta \Gamma ^{-1}\,U_{1,-\tau }\right) \end{aligned}$$
(6)

which will play a key role in our development. For a detailed discussion of the interplay of these two stochastic representations, see Sect. 2.1 of Arellano-Valle and Azzalini (2006).
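
The additive representation lends itself directly to random number generation. The following sketch (our own code, assuming numpy) implements (6) with a plain rejection step for the truncated component; this is adequate only when \(\Phi_m(\tau;\Gamma)\) is not too small.

```python
rng = np.random.default_rng(0)

def rsun(n, xi, Omega, Delta, tau, Gamma):
    """n draws from SUN_{d,m}(xi, Omega, Delta, tau, Gamma) via representation (6)."""
    d, m = Delta.shape
    omega = np.sqrt(np.diag(Omega))
    Obar = Omega / np.outer(omega, omega)
    Psi_bar = Obar - Delta @ np.linalg.solve(Gamma, Delta.T)   # variance of U_0
    out = np.empty((n, d))
    for i in range(n):
        while True:                                            # rejection: U_1 truncated below -tau
            u1 = rng.multivariate_normal(np.zeros(m), Gamma)
            if np.all(u1 + tau > 0):
                break
        u0 = rng.multivariate_normal(np.zeros(d), Psi_bar)
        out[i] = xi + omega * (u0 + Delta @ np.linalg.solve(Gamma, u1))
    return out
```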

Although the moment generating function M(t) has been known since the early work on this theme, it has not translated into decisive advances in the computation of moments and cumulants. Most of the available results for \( {\mathbb {E}}_{}\!\left\{ \displaystyle {Y}\right\} \) and \( \text {var}_{}\!\left\{ \displaystyle {Y}\right\} \) are limited in some way or another. For instance, the results in Sect. 3 of Gupta et al. (2004) refer to the case \(m=d\), and even so they employ very involved auxiliary functions. For the case where \(\Gamma \) is a diagonal matrix, Arellano-Valle and Azzalini (2006) provide explicit expressions for the expected value and the variance matrix, applicable for all d and m.

To our knowledge, the general expression of \( {\mathbb {E}}_{}\!\left\{ \displaystyle {Y}\right\} \) has been obtained by Azzalini and Bacchieri (2010). This expression involves the following quantities: \(\tau _{-j}\) denotes the vector obtained by removing the jth component of \(\tau \), for \(j=1,\dots ,m\); \(\Gamma _{-j}\) is the \((m-1)\times (m-1)\) matrix obtained by removing the jth row and column of \(\Gamma \); \(\gamma _{-j}\) denotes the jth column of \(\Gamma \) with its jth entry removed; finally, \({{\tilde{\Gamma }}}_{-j}= \Gamma _{-j} - \gamma _{-j}\gamma _{-j}^{\top }\). Then the mean value can be written as

$$\begin{aligned} {\mathbb {E}}_{}\!\left\{ \displaystyle {Y}\right\} = \left. \frac{\,\mathrm {d}K(t)}{\,\mathrm {d}t}\right| _{t=0} = \xi + \omega \,\Delta \, \frac{1}{\Phi _m(\tau ; \Gamma )}\,\nabla \Phi _m \end{aligned}$$
(7)

where \(\nabla \Phi _m\) is the m-vector with jth element

$$\begin{aligned} (\nabla \Phi_m)_j = {\left\{ \begin{array}{ll} \varphi(\tau_j) & \text{if } m=1, \\ \varphi(\tau_j)\,\Phi_{m-1}\left( \tau_{-j} - \gamma_{-j}\tau_j;\, {\tilde{\Gamma}}_{-j}\right) & \text{if } m>1. \end{array}\right. } \end{aligned}$$
(8)

An expression of type (7) or (8) can be regarded as ‘essentially explicit’, at least for moderate values of m, even if it involves the multivariate normal distribution function, \(\Phi _m\). The phrase ‘essentially explicit’ seems justified in the light of the current advances in computing \(\Phi _m\), similar to the process which, a few decades ago, led to regarding as ‘explicit’ an expression involving the univariate normal distribution function, \(\Phi \).
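
Indeed, the vector (8) involves only univariate normal densities and \((m-1)\)-dimensional normal cdf values, so formula (7) is immediate to code; a sketch, with our own function names and the same numpy/scipy assumptions as before:

```python
from scipy.stats import norm

def nabla_Phi_m(tau, Gamma):
    """The m-vector defined in (8)."""
    tau = np.asarray(tau)
    m = len(tau)
    if m == 1:
        return np.array([norm.pdf(tau[0])])
    g = np.empty(m)
    for j in range(m):
        keep = [k for k in range(m) if k != j]
        gamma_j = Gamma[keep, j]                 # j-th column of Gamma, j-th entry removed
        Gtil = Gamma[np.ix_(keep, keep)] - np.outer(gamma_j, gamma_j)
        g[j] = norm.pdf(tau[j]) * mvn.cdf(tau[keep] - gamma_j * tau[j],
                                          mean=np.zeros(m - 1), cov=Gtil)
    return g

def sun_mean(xi, Omega, Delta, tau, Gamma):
    """E{Y} as in (7)."""
    omega = np.sqrt(np.diag(Omega))
    P0 = mvn.cdf(tau, mean=np.zeros(len(tau)), cov=Gamma)
    return xi + omega * (Delta @ nabla_Phi_m(tau, Gamma)) / P0
```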

Some intermediate expressions of the SUN variance matrix have been provided by Gupta and Aziz (2012) and Gupta et al. (2013), where the word ‘intermediate’ reflects the presence in their result of the matrix of the second derivatives of \(\Phi _m\). Since these second derivatives have been provided in an explicit form only for some special sub-cases of the SUN family, the question of the general expression of the SUN variance matrix appears to be open. This is the problem to be tackled in our next section, followed by consideration of higher order moments.

2 Moments and related quantities

2.1 The variance matrix

We compute the variance matrix \( \text {var}_{}\!\left\{ \displaystyle {Y}\right\} \) using second-order differentiation of the cumulant generating function (3). Write

$$\begin{aligned} \frac{\,\mathrm {d}K(t)}{\,\mathrm {d}t} = \xi + \Omega t + \frac{\,\mathrm {d}\log P(t)}{\,\mathrm {d}t} = \xi + \Omega t + \frac{1}{P(t)}\, \frac{\,\mathrm {d}P(t)}{\,\mathrm {d}t} \end{aligned}$$
(9)

where \(P(t) = \Phi _m(\tau + \Delta ^{\top }\omega t; \Gamma )\). The only non-obvious terms are \(\partial P/\partial t_j\), for \(j=1,\dots ,m\). Denote

$$\begin{aligned} u_j= (\tau + \Delta ^{\top }\omega t)_j =\tau _j+\Delta _j^{\top }\omega t, \end{aligned}$$

where \(\Delta _j\) is the jth column of \(\Delta \), for \(j=1,\ldots ,m\). For notational simplicity, we focus on \(j=1\), since the other terms are analogous. Write the joint m-dimensional normal density involved in P as the product of the first marginal component times the conditional density of the other components, leading to

$$\begin{aligned} P(t) = \int _{-\infty }^{u_1}\cdots \int _{-\infty }^{u_m} \varphi (x_1) \, \varphi _{m-1}(x_{-1} -\mu _{-1}(x_1); {{\tilde{\Gamma }}}_{-1})\, \,\mathrm {d}x_1 \,\mathrm {d}x_{-1} \end{aligned}$$
(10)

where \(x_{-1}\) is the \((m-1)\)-vector obtained by removing the first component of x, \(\mu _{-1}(x_1)= \gamma _{-1} x_1\) denotes the mean value of the conditional normal distribution when the first component of \(\mathrm {N}{}_m(0,\Gamma )\) is fixed at \(x_1\), and \({{\tilde{\Gamma }}}_{-1}\) denotes the corresponding variance matrix; we have used the quantities introduced in connection with (7). Therefore

$$\begin{aligned} \frac{\partial P}{\partial t_1} = \frac{\partial u_1}{\partial t_1}\,\frac{\partial P}{\partial u_1} &= \left( \omega \Delta \right)_1 \, \varphi(u_1) \int_{-\infty}^{u_2}\cdots \int_{-\infty}^{u_m} \varphi_{m-1}(x_{-1} - \mu_{-1}(u_1); {\tilde{\Gamma}}_{-1})\, \mathrm{d}x_{-1} \\ &= \left( \omega \Delta \right)_1 \, \varphi(u_1)\, \Phi_{m-1}(\tau_{-1} - \gamma_{-1}u_1; {\tilde{\Gamma}}_{-1}) \end{aligned}$$
(11)

where the term \(\Phi _{m-1}(\cdot \,;\cdot )\) must be interpreted as 1 when \(m=1\). This convention also applies to subsequent expressions.

Application of (11) with the other values of the subscript j produces the entire gradient of P. Next, evaluation of the gradient (9) at \(t=0\) delivers the mean vector (7).

The second derivative of K(t) is obtained by differentiation of (9), yielding

$$\begin{aligned} \frac{\,\mathrm {d}^2 K(t)}{\,\mathrm {d}t\, \,\mathrm {d}t^{\top }} = \Omega + \frac{\,\mathrm {d}}{\,\mathrm {d}t^{\top }}\left( \frac{\,\mathrm {d}\log P(t)}{\,\mathrm {d}t}\right) \end{aligned}$$
(12)

where the generic entry of the final term is of the type

$$\begin{aligned} \frac{\partial ^2 \log P}{\partial t_1 \partial t_2} = - \frac{1}{P^2}\displaystyle \frac{\partial P}{\partial t_1}\displaystyle \frac{\partial P}{\partial t_2} + \frac{1}{P} \displaystyle \frac{\partial ^2 P}{\partial t_1\,\partial t_2}. \end{aligned}$$

The first summand on the right side is the product of quantities of type (11). For the second summand, consider first the case with \(t_1\not =t_2\), and follow a logic similar to that used for (10), but now separating out two components. Focusing on the first two components, for notational simplicity, write

$$\begin{aligned} P(t) = \int _{-\infty }^{u_1}\cdots \int _{-\infty }^{u_m} \varphi _2(x_{1:2}; \Gamma _{1:2}) \, \varphi _{m-2}\left( x_{-(1:2)} -\mu _{-(1:2)}(x_{1:2}); {{\tilde{\Gamma }}}_{-(1:2)}\right) \, \,\mathrm {d}x_{1:2} \,\mathrm {d}x_{-(1:2)} \end{aligned}$$
(13)

where \(x_{1:2}=(x_1,x_2)^\top \), \(\Gamma _{1:2}\) is the submatrix of \(\Gamma \) formed by its top-left \(2\times 2\) block, and so on, in the same logic and notational scheme used before.

Here we have implicitly assumed that \(m\ge 2\). This is a legitimate assumption since the case with \(m=1\) corresponds to the ESN distribution, for which \( \text {var}_{}\!\left\{ \displaystyle {Y}\right\} \) has been given by Capitanio et al. (2003), along with other moment-related results for that distribution.

The mixed derivative at \(t_1=t_2=0\) is

$$\begin{aligned} \left. \frac{\partial^2 P(t)}{\partial t_1 \partial t_2}\right|_{t_1=0,\, t_2=0} = \left( \omega \Delta \right)_{1:2}\, \varphi_2(\tau_{1:2}; \Gamma_{1:2})\, \Phi_{m-2}(\tau_{-(1:2)} - \mu_{-(1:2)}(\tau_{1:2}); {\tilde{\Gamma}}_{-(1:2)})\, \left( \Delta^{\top}\omega \right)_{1:2} \end{aligned}$$
(14)

where \(\mu _{-(1:2)}(\tau _{1:2})\) denotes the conditional mean of the components \((3,\dots ,m)\) given that \(x_{1:2}=\tau _{1:2}\), and \({{\tilde{\Gamma }}}_{-(1:2)}\) denotes the conditional variance. It is understood that the term \(\Phi _{m-2}(\cdot )\) equals 1 when \(m=2\). Expression (14) is immediately adapted to any other two components \((t_j, t_k)\), provided \(j\not =k\).

When \(j=k\), take \(j=k=1\) for simplicity of notation and write

$$\begin{aligned} \frac{\partial ^2 \log P}{\partial t_1^2} = \frac{\partial }{\partial t_1} \left( \frac{1}{P}\, \frac{\partial P}{\partial t_1}\right) = -\frac{1}{P^2}\left( \frac{\partial P}{\partial t_1}\right) ^2 + \frac{1}{P} \left( \frac{\partial ^2 P}{\partial t_1^2}\right) \end{aligned}$$

where \((\partial P/\partial t_1)\) is given by (11). Consider its core part \((\partial P/\partial u_1)\) and take the successive derivative

$$\begin{aligned} \frac{\partial^2 P}{\partial u_1^2} &= \frac{\partial}{\partial u_1} \left( \varphi(u_1)\, \Phi_{m-1}(\tau_{-1} - \gamma_{-1}u_1; {\tilde{\Gamma}}_{-1})\right) \\ &= - u_1 \varphi(u_1)\, \Phi_{m-1}(\tau_{-1} - \gamma_{-1}u_1; {\tilde{\Gamma}}_{-1}) + \varphi(u_1)\, \varphi_{m-1}(\tau_{-1} - \gamma_{-1}u_1; {\tilde{\Gamma}}_{-1})\, 1_{m-1}^{\top}\, \frac{\partial (\tau_{-1} - \gamma_{-1}u_1)}{\partial u_1} \\ &= - u_1 \varphi(u_1)\, \Phi_{m-1}(\tau_{-1} - \gamma_{-1}u_1; {\tilde{\Gamma}}_{-1}) + \varphi(u_1)\, \varphi_{m-1}(\tau_{-1} - \gamma_{-1}u_1; {\tilde{\Gamma}}_{-1})\, 1_{m-1}^{\top}(-\gamma_{-1}). \end{aligned}$$

Hence the second derivative \((\partial ^2 P/\partial t_1^2)\) evaluated at \(t_1=0\) is

$$\begin{aligned} \left. \frac{\partial^2 P}{\partial t_1^2}\right|_{t_1=0} &= (\omega \Delta)_{11} \,\big\{ -\tau_1 \varphi(\tau_1)\,\Phi_{m-1}(\tau_{-1}-\gamma_{-1}\tau_1; {\tilde{\Gamma}}_{-1}) - \varphi(\tau_1)\, \varphi_{m-1}(\tau_{-1} - \gamma_{-1}\tau_1; {\tilde{\Gamma}}_{-1})\, 1_{m-1}^{\top}\gamma_{-1} \big\}\,(\Delta^{\top}\omega)_{11} \\ &= (\omega \Delta)_{11} \,\big\{ -\varphi(\tau_1) \big[ \tau_1\,\Phi_{m-1}(\tau_{-1}-\gamma_{-1}\tau_1; {\tilde{\Gamma}}_{-1}) + \varphi_{m-1}(\tau_{-1} - \gamma_{-1}\tau_1; {\tilde{\Gamma}}_{-1})\, 1_{m-1}^{\top}\gamma_{-1}\big]\big\}\, (\Delta^{\top}\omega)_{11} \end{aligned}$$
(15)

which, similarly to earlier expressions, must be replicated for the other values of j.

Finally, as a general expression encompassing all terms in a matrix notation, we arrive at

$$\begin{aligned} \text {var}_{}\!\left\{ \displaystyle {Y}\right\} = \Omega + \omega \Delta H \Delta ^{\top }\omega = \Sigma , \end{aligned}$$
(16)

say, where H is the \(m\times m\) matrix formed by the terms given in (11), (14) and (15), stripped of their \(\omega \Delta \) and \(\Delta ^{\top }\omega \) factors. Even if we have derived (16) under the assumption that \(m\ge 2\), a subsequent inspection has shown that the expression remains valid provided the above derivatives are computed setting the \(\varphi _{m-1}\) and \(\Phi _{m-1}\) terms equal to 1 when \(m=1\), since we then recover the known expressions for the ESN distribution. With this convention, (16) holds for all m.

Starting from a different motivation, expressions for the derivatives of \(\Phi _m\) similar to those obtained above have been presented in Lemma 2.3 of Arellano-Valle et al. (2013). Their motivation was the computation of the mean value and the variance matrix of the truncated multivariate normal distribution, which are given in their Lemma 2.2. Taking into account the additive representation (6) of a SUN variable, those expressions could also be used to derive the SUN lower moments.

2.2 Higher-order moments

While in principle one could consider successive differentiations of K(t) to compute higher-order moments, this process becomes algebraically cumbersome. We therefore follow another route, based on the additive representation (6).

Our plan of work is as follows. A preliminary step is the development of various expressions concerning moments of the sum of two independent random vectors, which are presented separately in an appendix. Simplification can be obtained by using the fact that one of the components of (6) is a zero-mean normal variable. On this front, we benefit from the extensive literature on computational methods for the moments of a multivariate truncated normal distribution. Combining representation (6) with these results for the moments of a truncated normal variable, we obtain expressions for the desired SUN moments.

For a p-dimensional random variable X, define its moments up to the fourth order as

$$\begin{aligned} \mu_1(X) &= {\mathbb {E}}\{X\},\\ \mu_2(X) &= {\mathbb {E}}\{X\otimes X^{\top}\} = {\mathbb {E}}\{XX^{\top}\},\\ \mu_3(X) &= {\mathbb {E}}\{X\otimes X^{\top}\otimes X\} = {\mathbb {E}}\{X\otimes XX^{\top}\} = {\mathbb {E}}\{XX^{\top}\otimes X\} = {\mathbb {E}}\{\mathrm{vec}(XX^{\top})X^{\top}\},\\ \mu_4(X) &= {\mathbb {E}}\{X\otimes X^{\top}\otimes X\otimes X^{\top}\} = {\mathbb {E}}\{XX^{\top}\otimes XX^{\top}\} = {\mathbb {E}}\{\mathrm{vec}(XX^{\top})\mathrm{vec}(XX^{\top})^{\top}\}, \end{aligned}$$

provided the involved expected values exist. The equivalence of the various expressions for a given moment follows from standard properties of the Kronecker product. The \(\mathop {{\mathrm {vec}}}\nolimits \) operator stacks the columns of a matrix in a single vector.

Also, the following notation will be used, adopted from Magnus and Neudecker (1979) and Neudecker and Wansbeek (1983). For arbitrary natural numbers s, p, q, denote by \(e_{i:s}\) the ith s-dimensional unit vector, formed by all 0’s except a 1 in the ith position, and from here define \(E_{ij}=e_{i:p}e_{j:q}^{\top }\). Further, denote by

$$\begin{aligned} K_{pq}=\sum _{i=1}^p\sum _{j=1}^qE_{ij}\otimes E_{ij}^{\top }\end{aligned}$$

the pq-dimensional square commutation matrix and let \(K_r=K_{rr}\).

For algebraic convenience, we rewrite (6) in an equivalent form. Introduce the quantities

$$\begin{aligned} \Lambda = \omega \Delta \Gamma ^{-1}, \qquad \Psi = \Omega - \omega \Delta \Gamma ^{-1}\Delta ^{\top }\omega \end{aligned}$$
(17)

and denote by \(\Psi ^{1/2}\) the unique symmetric square root of \(\Psi \); however, it would make no difference if another square root of \(\Psi \) is considered. The fact that \(\Psi >0\) follows from the assumption that \(\Omega ^*\) in (1) has full rank. Given a \((d+m)\)-dimensional variable

$$\begin{aligned} Z_0 = \begin{pmatrix} V \\ W\end{pmatrix} \sim \mathrm{N}_{d+m} \left( \begin{pmatrix}0\\ 0\end{pmatrix}, \begin{pmatrix}I_d & 0 \\ 0 & \Gamma \end{pmatrix} \right) \end{aligned}$$
(18)

denote \(U{\mathop {=}\limits ^{d}}(W\mid W+\tau >0)\), so that (6) becomes

$$\begin{aligned} Y {\mathop {=}\limits ^{d}}\xi + X = \xi + \Lambda \,U + \Psi ^{1/2}\,V \,. \end{aligned}$$
(19)

Proposition 1

(SUN moments) Consider \(X=\Lambda U+ \Psi ^{1/2}V\sim \mathrm {SUN}{}_{d,m}(0,\Omega ,\Delta ,\tau ,\Gamma )\), where U, V and other involved quantities are defined in connection with expressions (17)–(19). Then:

$$\begin{aligned} \mu_1(X) &= \Lambda \mu_1(U),\\ \mu_2(X) &= \Lambda \mu_2(U)\Lambda^{\top} + \Psi,\\ \mu_3(X) &= (\Lambda \otimes \Lambda)\mu_3(U)\Lambda^{\top} + (I_{d^2}+K_d)(\Lambda \mu_1(U)\otimes \Psi) + \mathrm{vec}(\Psi)\mu_1(U)^{\top}\Lambda^{\top},\\ \mu_4(X) &= (\Lambda \otimes \Lambda)\mu_4(U)(\Lambda \otimes \Lambda)^{\top} + \mathrm{vec}(\Psi)\mathrm{vec}(\Psi)^{\top}\\ &\quad + (I_{d^2}+K_d)\big\{(\Lambda \mu_2(U)\Lambda^{\top}\otimes \Psi) + (\Psi \otimes \Lambda \mu_2(U)\Lambda^{\top}) + \Psi \otimes \Psi \big\}\\ &\quad + \mathrm{vec}(\Lambda \mu_2(U)\Lambda^{\top})\mathrm{vec}(\Psi)^{\top} + \mathrm{vec}(\Psi)\mathrm{vec}(\Lambda \mu_2(U)\Lambda^{\top})^{\top}. \end{aligned}$$

Moreover, the variance matrix of X is

$$\begin{aligned} \text {var}_{}\!\left\{ \displaystyle {X}\right\} =\Omega -\Lambda (\Gamma -\Sigma _U)\Lambda ^{\top }= \Sigma , \end{aligned}$$
(20)

say, having set \(\Sigma _U= \text {var}_{}\!\left\{ \displaystyle {U}\right\} =\mu _2(U)-\mu _1(U)\mu _1(U)^{\top }\). The inequality \(\Omega > \Sigma \) holds, in the sense that the difference of the matrices is positive definite.

Proof

Make use of Proposition A.5 in the Appendix for the moments of a linear combination of two independent multivariate variables, combined with the expressions in Proposition A.3 for the moments of V under the assumption \(V\sim \mathrm {N}{}_d(0, I_d)\). After some algebraic simplifications, one arrives at the stated expressions. The term \(\Gamma -\Sigma _U\) of (20) is a positive definite matrix, taking into account (A.3) in the appendix, which in the present case holds in the strict version of the matrix inequality. This implies that \(\Omega > \Sigma \). \(\square \)

The expressions \(\mu _k(X)\) given in Proposition 1 refer to a SUN variable X with location parameter \(\xi =0\). For the general case with arbitrary \(\xi \), consider the shifted variable \(Y=\xi +X\) and use expressions (A.5)–(A.8) in the appendix. Note also that Proposition 1 includes an expression for \( \text {var}_{}\!\left\{ \displaystyle {X}\right\} \) alternative to (16).

The actual usage of the expressions provided in Proposition 1 requires knowledge of the moments of the truncated normal component, \(\mu _k(U)\). In general, these moments are not amenable to explicit treatment, and one must resort to numerical computation. As already mentioned, there exists a vast literature concerned with this problem, and an exhaustive review of it would take far too much space. We therefore indicate only some recent results, referring the reader to the references quoted therein for earlier developments. Among the more recent proposals, we mention the methods for computing these moments presented by Arismendi (2013) and Kan and Robotti (2017). For the latter approach, there exist publicly available computing routines written by the authors in the Matlab language. A corresponding version of these routines is available in the R computing environment via either of the packages mnormt (Azzalini and Genz 2020) or MomTrunc (Galarza et al. 2020).
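
As a simple cross-check of Proposition 1, one can estimate the moments of U by brute-force rejection sampling and plug them into (20); the sketch below (our own code, under the numpy assumptions of the earlier snippets) does this, whereas serious use should rely on the dedicated routines cited above.

```python
def sun_var_mc(Omega, Delta, tau, Gamma, nsim=200_000, seed=1):
    """Monte Carlo approximation of var{Y} through expression (20)."""
    rng = np.random.default_rng(seed)
    omega = np.sqrt(np.diag(Omega))
    Lam = (omega[:, None] * Delta) @ np.linalg.inv(Gamma)   # Lambda = omega Delta Gamma^{-1}, see (17)
    W = rng.multivariate_normal(np.zeros(len(tau)), Gamma, size=nsim)
    U = W[np.all(W + tau > 0, axis=1)]                      # draws of U = (W | W + tau > 0)
    Sigma_U = np.cov(U, rowvar=False)
    return Omega - Lam @ (Gamma - Sigma_U) @ Lam.T          # expression (20)
```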

We now want to obtain expressions for Mardia’s measures of multivariate skewness and kurtosis, denoted \(\beta _{1,d}\) and \(\beta _{2,d}\) as in the original publications of Mardia (1970, 1974), apart from the symbol d adopted here to denote the dimensionality. To simplify the algebraic work, it is convenient to work with a suitably transformed variable, exploiting the invariance properties of Mardia’s measures with respect to nonsingular affine transformations. For a random variable \(Y\sim \mathrm {SUN}{}_{d,m}(\xi ,\Omega ,\Delta ,\tau ,\Gamma )\), consider again its representation (19), and introduce additionally

$$\begin{aligned} \mu _0 = \Lambda \,\mu _1(U), \quad \mu = {\mathbb {E}}_{}\!\left\{ \displaystyle {Y}\right\} =\xi +\mu _0,\quad Y = \mu + X_0, \quad X_0=X-\mu _0, \quad U_0=U - \mu _1(U) \end{aligned}$$

where X is as in Proposition 1. Ruling out degenerate cases, \(\Sigma = \text {var}_{}\!\left\{ \displaystyle {Y}\right\} = \text {var}_{}\!\left\{ \displaystyle {X_0}\right\} \) is non-singular. Consider then any non-singular \(d \times d\) matrix C such that \(\Sigma =C\,C^{\top }\); although not strictly necessary, a common choice is to set \(C=\Sigma ^{1/2}\), the unique symmetric positive-definite square root of \(\Sigma \). Next, introduce the standardized variable

$$\begin{aligned} {\tilde{Z}} = C^{-1}(Y-\mu) &= C^{-1}\Lambda U_0 + C^{-1}\Psi^{1/2} V \\ &\sim \mathrm{SUN}_{d,m}\left( -C^{-1}\mu_0,\; C^{-1}\Omega (C^{-1})^{\top},\; C^{-1}\omega \Delta,\; \tau,\; \Gamma \right) \end{aligned}$$
(21)

such that

$$\begin{aligned} {\mathbb {E}}_{}\!\left\{ \displaystyle {{{\tilde{Z}}}}\right\} =0, \qquad \text {var}_{}\!\left\{ \displaystyle {{{\tilde{Z}}}}\right\} = I_d \,. \end{aligned}$$

On setting \({{\tilde{\Lambda }}} = C^{-1}\Lambda \) and \({{\tilde{\Psi }}}=C^{-1}\Psi (C^{-1})^{\top }\), where \(\Lambda \) and \(\Psi \) are given in (17), and a matching definition of \({{\tilde{\Psi }}}^{1/2}\), we also note that

$$\begin{aligned} {\tilde{Z}} {\mathop {=}\limits ^{d}} {\tilde{\Lambda}}\, U_0 + {\tilde{\Psi}}^{1/2}\, V, \end{aligned}$$
(22)

where \({\mathop {=}\limits ^{d}}\) denotes equality in distribution.

The reason for introducing the variable \({{\tilde{Z}}}\) is the following: for a standardized variable \(X^*\), say, having zero mean vector and identity variance matrix, Mardia’s measures can be conveniently computed using the expressions given by Kollo and Srivastava (2005), namely

$$\begin{aligned} \beta _{1,d} = {\mathrm{tr}}\{\mu _3(X^*)^{\top }\mu _3(X^*)\} = \mathop {{\mathrm {vec}}}\nolimits \{\mu _3(X^*)\}^{\top }\mathop {{\mathrm {vec}}}\nolimits \{\mu _3(X^*)\}\,, \qquad \quad \beta _{2,d} = {\mathrm{tr}}\{\mu _4(X^*)\}\,. \end{aligned}$$
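
For a numerical illustration, the two formulas above can be applied to a standardized Monte Carlo sample; the sketch below reuses the hypothetical `rsun` sampler introduced earlier and takes C as the Cholesky factor of \(\Sigma\), which is legitimate since any C with \(CC^{\top}=\Sigma\) may be used.

```python
def mardia_mc(xi, Omega, Delta, tau, Gamma, n=100_000):
    """Monte Carlo values of beta_{1,d} and beta_{2,d} via the Kollo-Srivastava formulas."""
    Y = rsun(n, xi, Omega, Delta, tau, Gamma)
    S = np.atleast_2d(np.cov(Y, rowvar=False))
    C = np.linalg.cholesky(S)                           # any C with C C^T = Sigma will do
    Z = (Y - Y.mean(axis=0)) @ np.linalg.inv(C).T       # standardized sample
    V = np.einsum('ni,nj->nij', Z, Z).reshape(n, -1)    # each row: vec(z z^T)
    mu3 = V.T @ Z / n                                   # E{vec(XX^T) X^T}
    mu4 = V.T @ V / n                                   # E{vec(XX^T) vec(XX^T)^T}
    return np.trace(mu3.T @ mu3), np.trace(mu4)
```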

The next statement presents the evaluation of these expressions for the SUN variable \({\tilde{Z}}\).

Proposition 2

For the random variable \({{\tilde{Z}}}\) specified as in (21) or, equivalently, as in (22), the following expected values hold:

$$\begin{aligned} \mu_3({\tilde{Z}}) &= ({\tilde{\Lambda}}\otimes {\tilde{\Lambda}})\mu_3(U_0){\tilde{\Lambda}}^{\top},\\ \mu_4({\tilde{Z}}) &= ({\tilde{\Lambda}}\otimes {\tilde{\Lambda}})\mu_4(U_0)({\tilde{\Lambda}}\otimes {\tilde{\Lambda}})^{\top}\\ &\quad + (I_{d^2}+ K_d)({\tilde{\Lambda}}\Sigma_U {\tilde{\Lambda}}^{\top}\otimes {\tilde{\Psi}} + {\tilde{\Psi}}\otimes {\tilde{\Lambda}}\Sigma_U {\tilde{\Lambda}}^{\top} + {\tilde{\Psi}}\otimes {\tilde{\Psi}})\\ &\quad + \mathrm{vec}({\tilde{\Lambda}}\Sigma_U {\tilde{\Lambda}}^{\top})\mathrm{vec}({\tilde{\Psi}})^{\top} + \mathrm{vec}({\tilde{\Psi}})\mathrm{vec}({\tilde{\Lambda}}\Sigma_U {\tilde{\Lambda}}^{\top})^{\top} + \mathrm{vec}({\tilde{\Psi}})\mathrm{vec}({\tilde{\Psi}})^{\top}, \end{aligned}$$

where \(\Sigma _U=\mu _2(U_0)= \text {var}_{}\!\left\{ \displaystyle {U}\right\} \), \({{\tilde{\Lambda }}} = C^{-1}\Lambda \), \({{\tilde{\Psi }}}=C^{-1}\Psi (C^{-1})^{\top }\), and \(C\,C^{\top }=\Sigma \). Moreover, Mardia’s measures of multivariate skewness and kurtosis are

$$\begin{aligned} \beta_{1,d} &= \mathrm{tr}\{(\Lambda^{\top}\Sigma^{-1}\Lambda \otimes \Lambda^{\top}\Sigma^{-1}\Lambda)\mu_3(U_0)\Lambda^{\top}\Sigma^{-1}\Lambda\, \mu_3(U_0)^{\top}\}\\ &= \mathrm{vec}\{\mu_3(U_0)\}^{\top}({\tilde{\Lambda}}^{\top}{\tilde{\Lambda}}\otimes {\tilde{\Lambda}}^{\top}{\tilde{\Lambda}}\otimes {\tilde{\Lambda}}^{\top}{\tilde{\Lambda}})\mathrm{vec}\{\mu_3(U_0)\},\\ \beta_{2,d} &= \mathrm{tr}\{(\Lambda^{\top}\Sigma^{-1}\Lambda \otimes \Lambda^{\top}\Sigma^{-1}\Lambda)\mu_4(U_0)\} + 2\,\mathrm{tr}(\Sigma_U \Lambda^{\top}\Sigma^{-1}\Lambda)\,\mathrm{tr}\{\Psi \Sigma^{-1}\}\\ &\quad + \mathrm{tr}\{\Psi \Sigma^{-1}\}^2 + 4\,\mathrm{tr}\{\Sigma_U\Lambda^{\top}\Sigma^{-1}\Psi \Sigma^{-1}\Lambda\} + 2\,\mathrm{tr}\{\Psi \Sigma^{-1}\Psi \Sigma^{-1}\}, \end{aligned}$$

where

$$\begin{aligned} \mu_3(U_0) &= (I_{m^2} + K_m)\{\mu_1(U)\otimes \mu_1(U)\mu_1(U)^{\top} - \mu_1(U)\otimes \mu_2(U)\} - \mathrm{vec}\{\mu_2(U)\}\mu_1(U)^{\top} + \mu_3(U),\\ \mu_4(U_0) &= -3\, \mu_1(U)\mu_1(U)^{\top}\otimes \mu_1(U)\mu_1(U)^{\top} + (I_{m^2} + K_m)\{ \mu_1(U)\mu_1(U)^{\top}\otimes \mu_2(U)\\ &\quad + \mu_2(U)\otimes \mu_1(U)\mu_1(U)^{\top} - (\mu_1(U)\otimes I_m)\mu_3(U)^{\top} - \mu_3(U)(\mu_1(U)^{\top}\otimes I_m)\}\\ &\quad + \mathrm{vec}\{\mu_2(U)\}(\mu_1(U)\otimes \mu_1(U))^{\top} + (\mu_1(U)\otimes \mu_1(U))\mathrm{vec}\{\mu_2(U)\}^{\top} + \mu_4(U). \end{aligned}$$

Proof

The expressions of \(\mu _3({{\tilde{Z}}})\) and \(\mu _4({{\tilde{Z}}})\) follow directly from Proposition 1, by using it with the terms \(X, \Lambda , \Psi \) specified as \({\tilde{Z}}, {{\tilde{\Lambda }}}, {{\tilde{\Psi }}}\).

Therefore, we concentrate on the derivation of \(\beta _{1,d}\) and \(\beta _{2,d}\) only. An algebraically convenient route to obtain these quantities is from the stochastic representation in (21). Denote by \(Y'=\mu +C\,{{\tilde{Z}}}'\) an independent replicate of \(Y=\mu +C\,{{\tilde{Z}}}\), so that Mardia’s measure of skewness can be expressed as

$$\begin{aligned} \beta_{1,d} = {\mathbb {E}}\{[(Y-\mu)^{\top}\Sigma^{-1}(Y'-\mu)]^3\} = {\mathbb {E}}\{({\tilde{Z}}^{\top}{\tilde{Z}}')^3\}. \end{aligned}$$

First, introduce the matrices \(M_{00}= \Lambda ^{\top }\Sigma ^{-1}\Lambda \), \(M_{01}= \Lambda ^{\top }\Sigma ^{-1}\Psi ^{1/2}\), \(M_{10}= \Psi ^{1/2}\Sigma ^{-1}\Lambda =M_{01}^{\top }\), and \(M_{11}= \Psi ^{1/2}\Sigma ^{-1}\Psi ^{1/2}\). Then expand

$$\begin{aligned} \left( {\tilde{Z}}^{\top}{\tilde{Z}}'\right)^3 &= [(U_0^{\top}\Lambda^{\top}(C^{-1})^{\top}+V^{\top}\Psi^{1/2} (C^{-1})^{\top}) (C^{-1}\Lambda \,U_0'+C^{-1}\Psi^{1/2} V')]^3\\ &= (U_0^{\top}M_{00}U_0'+U_0^{\top}M_{01} V' + V^{\top}M_{01}^{\top}U_0'+V^{\top}M_{11} V')^3\\ &= (U_0^{\top}M_{00}U_0'+U_0^{\top}M_{01} V')^3\\ &\quad + 3(U_0^{\top}M_{00}U_0'+U_0^{\top}M_{01} V')^2(V^{\top}M_{01}^{\top}U_0'+V^{\top}M_{11} V')\\ &\quad + 3(U_0^{\top}M_{00}U_0'+U_0^{\top}M_{01} V')(V^{\top}M_{01}^{\top}U_0'+V^{\top}M_{11} V')^2\\ &\quad + (V^{\top}M_{01}^{\top}U_0'+V^{\top}M_{11} V')^3\\ &= (U_0^{\top}M_{00}U_0')^3 + 3(U_0^{\top}M_{00}U_0')^2(U_0^{\top}M_{01} V')\\ &\quad + 3(U_0^{\top}M_{00}U_0')(U_0^{\top}M_{01} V')^2 + (U_0^{\top}M_{01} V')^3\\ &\quad + 3[\{(U_0^{\top}M_{00}U_0')^2+2\,U_0^{\top}M_{00}U_0'\,U_0^{\top}M_{01} V'+(U_0^{\top}M_{01} V')^2\}\, V^{\top}M_{01}^{\top}U_0'\\ &\qquad +\{(U_0^{\top}M_{00}U_0')^2+2\,U_0^{\top}M_{00}U_0'\,U_0^{\top}M_{01} V'+(U_0^{\top}M_{01} V')^2\}\, V^{\top}M_{11} V']\\ &\quad + 3[U_0^{\top}M_{00}U_0'\{(V^{\top}M_{01}^{\top}U_0')^2+2\,V^{\top}M_{01}^{\top}U_0'\,V^{\top}M_{11} V'+(V^{\top}M_{11} V')^2\}\\ &\qquad +U_0^{\top}M_{01} V'\{(V^{\top}M_{01}^{\top}U_0')^2+2\,V^{\top}M_{01}^{\top}U_0'\,V^{\top}M_{11} V'+(V^{\top}M_{11} V')^2\}]\\ &\quad + (V^{\top}M_{01}^{\top}U_0')^3 + 3(V^{\top}M_{01}^{\top}U_0')^2(V^{\top}M_{11} V')\\ &\quad + 3(V^{\top}M_{01}^{\top}U_0')(V^{\top}M_{11} V')^2 + (V^{\top}M_{11} V')^3. \end{aligned}$$

Take into account that \(U_0\) and \(U_0'\) are independent and identically distributed (i.i.d.) random vectors with zero mean, and that V and \(V'\) are i.i.d. random vectors with a spherical normal distribution, so that all the terms involving odd functions of V or \(V'\) have zero expectation. Consider also that \(U_0\), \(U_0'\), V and \(V'\) are mutually independent. Then, by taking the expectation and removing the terms with zero mean, we have

$$\begin{aligned} {\mathbb {E}}\{({\tilde{Z}}^{\top}{\tilde{Z}}')^3\} &= {\mathbb {E}}\{(U_0^{\top}M_{00}U_0')^3\}\\ &= {\mathbb {E}}\{U_0^{\top}M_{00}U_0'\;U_0^{\top}M_{00}U_0'\;U_0^{\top}M_{00}U_0'\}\\ &= {\mathbb {E}}\{(U_0')^{\top}M_{00}U_0\;U_0^{\top}M_{00}U_0'\;(U_0')^{\top}M_{00}U_0\}\\ &= {\mathbb {E}}\{\mathrm{tr}(M_{00}U_0U_0^{\top}M_{00}U_0'(U_0')^{\top}M_{00}U_0(U_0')^{\top})\}. \end{aligned}$$

Now use the equality \({\mathrm{tr}}(EFGH)=\mathop {{\mathrm {vec}}}\nolimits (H^{\top })^{\top }(G^{\top }\otimes E)\mathop {{\mathrm {vec}}}\nolimits (F)\) given in Lemma 3 of Magnus and Neudecker (1986) with \(E=M_{00}U_0U_0^{\top }M_{00}\), \(F=U_0'(U_0')^{\top }\), \(G=M_{00}U_0\) and \(H=(U_0')^{\top }\), and write

$$\begin{aligned} \beta_{1,d} = {\mathbb {E}}\{({\tilde{Z}}^{\top}{\tilde{Z}}')^3\} &= {\mathbb {E}}\{\mathrm{vec}(U_0')^{\top}(U_0^{\top}M_{00}\otimes M_{00}U_0U_0^{\top}M_{00})\mathrm{vec}(U_0'(U_0')^{\top})\}\\ &= {\mathbb {E}}\{(U_0')^{\top}M_{00}(U_0^{\top}\otimes U_0U_0^{\top})(M_{00}\otimes M_{00})(U_0'\otimes U_0')\}\\ &= {\mathbb {E}}\{\mathrm{tr}\{(M_{00}\otimes M_{00})(U_0'(U_0')^{\top}\otimes U_0')M_{00}(U_0^{\top}\otimes U_0U_0^{\top})\}\}\\ &= \mathrm{tr}\left\{ (M_{00}\otimes M_{00})\, {\mathbb {E}}\{U_0'(U_0')^{\top}\otimes U_0'\}\, M_{00}\, {\mathbb {E}}\{U_0^{\top}\otimes U_0U_0^{\top}\} \right\}\\ &= \mathrm{tr}\left\{ (M_{00}\otimes M_{00})\mu_3(U_0')M_{00}\, \mu_3(U_0)^{\top}\right\}, \end{aligned}$$

where \(\mu _3(U_0')=\mu _3(U_0)\) since \(U_0'\) and \(U_0\) are i.i.d. variables.

Proceeding in a similar way for the measure of kurtosis, we have

$$\begin{aligned} \beta_{2,d} &= {\mathbb {E}}\{[(Y-\mu)^{\top}\Sigma^{-1}(Y-\mu)]^2\}\\ &= {\mathbb {E}}\{({\tilde{Z}}^{\top}{\tilde{Z}})^2\}\\ &= {\mathbb {E}}\{[(U_0^{\top}\Lambda^{\top}(C^{-1})^{\top}+ V^{\top}\Psi^{1/2} (C^{-1})^{\top})\,(C^{-1}\Lambda \,U_0+C^{-1}\Psi^{1/2} V)]^2\}\\ &= {\mathbb {E}}\{(U_0^{\top}M_{00}U_0+U_0^{\top}M_{01} V + V^{\top}M_{01}^{\top}U_0+V^{\top}M_{11} V)^2\}\\ &= {\mathbb {E}}\{(U_0^{\top}M_{00}U_0+U_0^{\top}M_{01} V)^2\} +2\, {\mathbb {E}}\{(U_0^{\top}M_{00}U_0+U_0^{\top}M_{01} V)(V^{\top}M_{01}^{\top}U_0+V^{\top}M_{11} V)\}\\ &\quad + {\mathbb {E}}\{(V^{\top}M_{01}^{\top}U_0+V^{\top}M_{11} V)^2\}\\ &= {\mathbb {E}}\{(U_0^{\top}M_{00}U_0)^2\} +2\, {\mathbb {E}}\{(U_0^{\top}M_{00}U_0)(U_0^{\top}M_{01} V)\} + {\mathbb {E}}\{(U_0^{\top}M_{01} V)^2\}\\ &\quad +2\, {\mathbb {E}}\{(U_0^{\top}M_{00}U_0)(V^{\top}M_{01}^{\top}U_0+V^{\top}M_{11} V)\} +2\, {\mathbb {E}}\{(U_0^{\top}M_{01} V)(V^{\top}M_{01}^{\top}U_0+V^{\top}M_{11} V)\}\\ &\quad + {\mathbb {E}}\{(V^{\top}M_{01}^{\top}U_0)^2\} +2\, {\mathbb {E}}\{(V^{\top}M_{01}^{\top}U_0)(V^{\top}M_{11} V)\} + {\mathbb {E}}\{(V^{\top}M_{11} V)^2\}\\ &= {\mathbb {E}}\{(U_0^{\top}M_{00}U_0)^2\} + {\mathbb {E}}\{(U_0^{\top}M_{01} V)^2\} +2\, {\mathbb {E}}\{(U_0^{\top}M_{00}U_0)(V^{\top}M_{11}V)\}\\ &\quad +2\, {\mathbb {E}}\{(U_0^{\top}M_{01}V)(V^{\top}M_{01}^{\top}U_0)\} + {\mathbb {E}}\{(V^{\top}M_{01}^{\top}U_0)^2\} + {\mathbb {E}}\{(V^{\top}M_{11}V)^2\}, \end{aligned}$$

where the terms with zero expectation have been removed, namely those associated with odd functions of V. The remaining expected values can be worked out by recalling that powers of quadratic forms can be expressed as traces of matrix products, combined with properties of the trace of products of matrices, specifically that \({\mathrm{tr}}(K_{pq}(P^{\top }\otimes Q))= {\mathrm{tr}}(P^{\top }Q)\), as stated in Theorem 3.1, item (xiii), of Magnus and Neudecker (1979), and, in the case \(p=q\), \({\mathrm{tr}}(P\otimes Q)={\mathrm{tr}}(P)\,{\mathrm{tr}}(Q)\) and \({\mathrm{tr}}(P^{\top }Q)=\mathop {{\mathrm {vec}}}\nolimits (P)^{\top }\mathop {{\mathrm {vec}}}\nolimits (Q)\). We then obtain

$$\begin{aligned} \beta_{2,d} &= {\mathbb {E}}\{\mathrm{tr}(M_{00}U_0U_0^{\top}M_{00}U_0U_0^{\top})\} +2\, {\mathbb {E}}\{\mathrm{tr}(M_{01} VV^{\top}M_{01}^{\top}U_0U_0^{\top})\}\\ &\quad +2\, {\mathbb {E}}\{\mathrm{tr}(M_{00}U_0U_0^{\top})\,\mathrm{tr}(M_{11}VV^{\top})\} +2\, {\mathbb {E}}\{\mathrm{tr}(M_{01}VV^{\top}M_{01}^{\top}U_0U_0^{\top})\}\\ &\quad + {\mathbb {E}}\{\mathrm{tr}(M_{11}VV^{\top}M_{11} VV^{\top})\}\\ &= {\mathbb {E}}\{\mathrm{vec}(U_0U_0^{\top})^{\top}(M_{00}\otimes M_{00})\mathrm{vec}(U_0U_0^{\top})\} +2\, {\mathbb {E}}\{\mathrm{vec}(U_0U_0^{\top})^{\top}(M_{01}\otimes M_{01})\mathrm{vec}(VV^{\top})\}\\ &\quad +2\, {\mathbb {E}}\{\mathrm{tr}(M_{00}U_0U_0^{\top})\,\mathrm{tr}(M_{11}VV^{\top})\} +2\, {\mathbb {E}}\{\mathrm{tr}(M_{01} VV^{\top}M_{01}^{\top}U_0U_0^{\top})\}\\ &\quad + {\mathbb {E}}\{\mathrm{vec}(VV^{\top})^{\top}(M_{11} \otimes M_{11})\mathrm{vec}(VV^{\top})\}\\ &= \mathrm{tr}\left[ (M_{00}\otimes M_{00})\, {\mathbb {E}}\{\mathrm{vec}(U_0U_0^{\top})\mathrm{vec}(U_0U_0^{\top})^{\top}\}\right] +2\, {\mathbb {E}}\{\mathrm{vec}(U_0U_0^{\top})^{\top}\}(M_{01}\otimes M_{01})\, {\mathbb {E}}\{\mathrm{vec}(VV^{\top})\}\\ &\quad +2\,\mathrm{tr}\left( M_{00}\, {\mathbb {E}}\{U_0U_0^{\top}\}\right) \mathrm{tr}\left( M_{11}\, {\mathbb {E}}\{VV^{\top}\}\right) +2\,\mathrm{tr}\left( M_{01}\, {\mathbb {E}}\{VV^{\top}\}\, M_{01}^{\top}\, {\mathbb {E}}\{U_0U_0^{\top}\}\right)\\ &\quad +\mathrm{tr}\left[ (M_{11}\otimes M_{11})\, {\mathbb {E}}\{\mathrm{vec}(VV^{\top})\mathrm{vec}(VV^{\top})^{\top}\}\right]\\ &= \mathrm{tr}\left\{ (M_{00}\otimes M_{00})\mu_4(U_0)\right\} +2\,\mathrm{vec}\{\mu_2(U_0)\}^{\top}(M_{01} \otimes M_{01})\mathrm{vec}\{\mu_2(V)\}\\ &\quad +2\,\mathrm{tr}\left( M_{00}\mu_2(U_0)\right) \mathrm{tr}\left( M_{11}\mu_2(V)\right) +2\,\mathrm{tr}(M_{01} \mu_2(V)M_{01}^{\top}\mu_2(U_0)) +\mathrm{tr}\left[ (M_{11}\otimes M_{11})\mu_4(V)\right]. \end{aligned}$$

Taking into account Lemma A.3 in the appendix, we can substitute \(\mu _2(V)=I_d\), \(\mathop {{\mathrm {vec}}}\nolimits \{\mu _2(V)\}=\mathop {{\mathrm {vec}}}\nolimits (I_d)\), \(\mu _4(V)=I_{d^2}+K_d+\mathop {{\mathrm {vec}}}\nolimits (I_d)\mathop {{\mathrm {vec}}}\nolimits (I_d)^{\top }\), and \(\mu _2(U_0)= \text {var}_{}\!\left\{ \displaystyle {U_0}\right\} \), leading to

$$\begin{aligned} \beta_{2,d} &= \mathrm{tr}\left[ (M_{00}\otimes M_{00})\mu_4(U_0)\right] +2\,\mathrm{vec}\{\mu_2(U_0)\}^{\top}(M_{01} \otimes M_{01})\mathrm{vec}(I_d)\\ &\quad +2\,\mathrm{tr}\left( M_{00}\mu_2(U_0)\right) \mathrm{tr}\left( M_{11}\right) +2\,\mathrm{tr}(M_{01} M_{01}^{\top}\mu_2(U_0))\\ &\quad +\mathrm{tr}\left\{ (M_{11}\otimes M_{11})(I_{d^2}+K_d)\right\} +\mathrm{tr}\left\{ (M_{11}\otimes M_{11})\mathrm{vec}(I_d)\mathrm{vec}(I_d)^{\top}\right\}\\ &= \mathrm{tr}\left\{ \mu_4(U_0)(M_{00}\otimes M_{00})\right\} +4\,\mathrm{tr}(\mu_2(U_0)M_{01}M_{01}^{\top})\\ &\quad +2\,\mathrm{tr}(\mu_2(U_0)M_{00})\,\mathrm{tr}(M_{11}) +\mathrm{tr}(M_{11})^2+2\,\mathrm{tr}(M_{11}^2), \end{aligned}$$

where \(M_{01}M_{01}^{\top }=\Lambda ^{\top }\Sigma ^{-1}\Psi \Sigma ^{-1}\Lambda \), \({\mathrm{tr}}(M_{11})={\mathrm{tr}}(\Psi \Sigma ^{-1})\) and \({\mathrm{tr}}(M_{11}^2)={\mathrm{tr}}(\Psi \Sigma ^{-1}\Psi \Sigma ^{-1})\). \(\square \)

3 Other properties

3.1 Log-concavity of the SUN distribution

The SN distribution is known to be log-concave, even in its extended version, ESN; see Azzalini and Regoli (2012) for a proof. Since the ESN distribution corresponds to the SUN with \(m=1\), it is natural to investigate the same property for a general value of m.

An often-employed definition of log-concave distribution in the continuous case requires that the logarithm of its density function is a concave function. In the more specialized literature, the concept of log-concavity is expressed via the corresponding probability measure, by requiring that

$$\begin{aligned} {\mathbb {P}}_{}\!\left\{ \displaystyle {\lambda A + (1-\lambda ) B}\right\} \ge {\mathbb {P}}_{}\!\left\{ \displaystyle {A}\right\} ^\lambda \, {\mathbb {P}}_{}\!\left\{ \displaystyle {B}\right\} ^{1-\lambda } \end{aligned}$$
(23)

for any two Borel sets A and B, and for any \(0<\lambda <1\). For general information on this theme, we refer to Chapter 2 of Dharmadhikari and Joag-dev (1988) and Chapter 4 of Prékopa (1995), which provide extensive compendia of a vast literature.

Established results ensure the equivalence of the definition of log-concavity based on the density function and the one in (23); see Theorem 4.2.1 of Prékopa (1995) and Theorem 2.8 of Dharmadhikari and Joag-dev (1988). Moreover, the corresponding distribution function is then also a log-concave function; see Theorem 4.2.4 of Prékopa (1995).

Proposition 3

The SUN distribution is log-concave.

Proof

The proof is based on the additive representation in the form (19), which involves the underlying variable \(Z_0\) indicated in (18). For the multivariate normal distribution, log-concavity is a well-known fact. Next, recall Theorem 9 of Horrace (2005), which ensures log-concavity of a normal distribution subject to one-sided truncation. In our case the truncation operates on the variable \(Z_0=(V^{\top }, W^{\top })^\top \) in the form \(W+\tau >0\). Since \(U{\mathop {=}\limits ^{d}}(W|W+\tau >0)\), this establishes log-concavity of the distribution of \((V^{\top }, U^{\top })\). A variable \(Y\sim \mathrm {SUN}{}_{d,m}(\xi ,\Omega ,\Delta , \tau , \Gamma )\) can be obtained from \((V^{\top }, U^{\top })\) by the affine transformation

$$\begin{aligned} Y = \xi + \begin{pmatrix} \Psi^{1/2}&\Lambda \end{pmatrix} \begin{pmatrix} V \\ U \end{pmatrix}. \end{aligned}$$

Preservation of log-concavity under an affine transformation has been proved by Henningsson and Åström (2006). Strictly speaking, their statement refers to a transformation involving a square matrix, having dimension \(d+m\) in our notation, but it is easy to see that this fact extends to reduced-dimension transformations, since one can think of a full-rank transformation to an augmented variable of dimension \(d+m\), followed by marginalization to extract the Y component. Since marginalization preserves log-concavity, as stated for instance by Theorem 4.2.2 of Prékopa (1995), this concludes the proof. \(\square \)
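
Although no substitute for the proof, Proposition 3 can also be checked numerically: the Hessian of \(\log f_Y\), approximated by finite differences from the hypothetical `dsun` function sketched earlier, should have all-negative eigenvalues at any test point.

```python
def log_f(x, pars):
    return np.log(dsun(x, *pars))

def num_hessian(f, x, h=1e-4):
    """Central finite-difference Hessian of a scalar function f at x."""
    d = len(x)
    H = np.empty((d, d))
    I = np.eye(d)
    for i in range(d):
        for j in range(d):
            H[i, j] = (f(x + h*I[i] + h*I[j]) - f(x + h*I[i] - h*I[j])
                       - f(x - h*I[i] + h*I[j]) + f(x - h*I[i] - h*I[j])) / (4*h*h)
    return H

# at any test point x0, all entries of
#   np.linalg.eigvalsh(num_hessian(lambda x: log_f(x, pars), x0))
# should be negative, up to discretization error.
```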

3.2 Conditional density generated by interval selection

For a random variable \(Y\sim \mathrm {SUN}{}_{d,m}(\xi , \Omega , \Delta , \tau , \Gamma )\), consider a partition of Y and its associated quantities, as follows

$$\begin{aligned} Y = \begin{pmatrix} Y_1 \\ Y_2\end{pmatrix}, \quad \xi = \begin{pmatrix} \xi_1 \\ \xi_2\end{pmatrix}, \quad \Omega = \begin{pmatrix} \Omega_{11} & \Omega_{12} \\ \Omega_{21} & \Omega_{22}\end{pmatrix}, \quad \omega = \begin{pmatrix} \omega_{1} & 0 \\ 0 & \omega_{2}\end{pmatrix},\quad \Delta = \begin{pmatrix} \Delta_1 \\ \Delta_2 \end{pmatrix} \end{aligned}$$
(24)

where \(Y_1\) and \(Y_2\) have dimension \(d_1\) and \(d_2\), with a corresponding partition for the scaled matrix \({{\bar{\Omega }}}\) which appears in (1) and (2).

In Proposition 2.3.2 of González-Farías et al. (2004b), it is proved that the conditional distribution of \(Y_2\) given that \(Y_1=y_1\), for any vector \(y_1\in {\mathbb {R}}^{d_1}\), is still of SUN type. Here, we want to examine another conditional distribution of \(Y_2\), namely the one which arises when the conditioning event on \(Y_1\) is instead an orthant-type interval of the form \((Y_1+y_1>0)\), where the inequality sign holds for each variable component, or some similar orthant-type condition.

Proposition 4

If \(Y\sim \mathrm {SUN}{}_{d,m}(\xi , \Omega , \Delta , \tau , \Gamma )\) with elements partitioned as indicated in (24), then

$$\begin{aligned} (Y_2 \mid Y_1+y_1>0)\sim \mathrm{SUN}_{d_2,d_1+m} \left( \xi_2,\,\Omega_{22},\,(\bar{\Omega}_{21},\Delta_2),\, \begin{pmatrix} {\tilde{z}}_1 \\ \tau \end{pmatrix},\, \begin{pmatrix} \bar{\Omega}_{11} &\Delta_1 \\ \Delta_1^{\top}&\Gamma \end{pmatrix} \right) \end{aligned}$$
(25)

where \({\tilde{z}}_1=\omega _1^{-1}(\xi _1+y_1)\) and the inequality sign is understood to hold for each component of \(Y_1\), if \(d_1>1\). In the case where the inequality sign is reversed, we have

$$\begin{aligned} (Y_2 \mid Y_1+y_1<0)\sim \mathrm{SUN}_{d_2,d_1+m} \left( \xi_2,\,\Omega_{22},\,(-\bar{\Omega}_{21},\Delta_2),\, \begin{pmatrix} {\hat{z}}_1 \\ \tau \end{pmatrix},\, \begin{pmatrix} \bar{\Omega}_{11} &-\Delta_1\\ -\Delta_1^{\top}&\Gamma \end{pmatrix} \right) \end{aligned}$$
(26)

where \({\hat{z}}_1=-\omega _1^{-1}(\xi _1+y_1)\).

Proof

Recall formula (4) for computing the distribution of an affine transformation of a SUN variable. Using this transformation rule, \(Z_j=\omega _j^{-1}(Y_j-\xi _j) \sim \mathrm {SUN}{}_{d_j,m}(0, {{\bar{\Omega }}}_{jj}, \Delta _j, \tau , \Gamma )\) for \(j=1,2\). Then, on setting \(z_2=\omega _2^{-1}(y_2-\xi _2)\), the conditional distribution function of \((Y_2|Y_1+y_1>0)\) evaluated at \(y_2\in {\mathbb {R}}^{d_2}\) is

$$\begin{aligned} F_{Y_2}(y_2\mid Y_1+y_1>0) &= {\mathbb {P}}\{Y_2\le y_2\mid Y_1+y_1>0\}\\ &= \frac{ {\mathbb {P}}\{Y_1+y_1>0,\,Y_2\le y_2\} }{ {\mathbb {P}}\{Y_1+y_1>0\} }\\ &= \frac{ {\mathbb {P}}\{\xi_1+\omega_1 Z_1+y_1>0,\,Y_2\le y_2\} }{ {\mathbb {P}}\{\xi_1+\omega_1 Z_1+y_1>0\} }\\ &= \frac{ {\mathbb {P}}\{-Z_1<{\tilde{z}}_1,\,Z_2\le z_2\} }{ {\mathbb {P}}\{-Z_1<{\tilde{z}}_1\} }\\ &= \frac{F_{-Z_1,Z_2}({\tilde{z}}_1,z_2)}{F_{-Z_1}({\tilde{z}}_1)}. \end{aligned}$$
(27)

where \(F_{X}(\cdot )\) denotes the distribution function of a SUN variable X, given by (5). Using again formula (4), write

$$\begin{aligned} \begin{pmatrix} -Z_1\\ Z_2\end{pmatrix} \sim \mathrm{SUN}_{d_1+d_2,m} \left( \begin{pmatrix} 0\\ 0\end{pmatrix}, \begin{pmatrix} \bar{\Omega}_{11}&-\bar{\Omega}_{12}\\ -\bar{\Omega}_{21}&\bar{\Omega}_{22}\end{pmatrix}, \begin{pmatrix} -\Delta_1\\ \Delta_2\end{pmatrix},\tau,\Gamma \right) \end{aligned}$$
(28)

and

$$\begin{aligned}-Z_1\sim \mathrm {SUN}{}_{d_1,m}(0,{{\bar{\Omega }}}_{11},-\Delta _1,\tau ,\Gamma ) \,,\end{aligned}$$

so that the two ingredients of (27) are

$$\begin{aligned} F_{-Z_1,Z_2}({\tilde{z}}_1,z_2)=\frac{1}{\Phi_m(\tau;\Gamma)}\, \Phi_{d_1+d_2+m}\left\{ \begin{pmatrix} {\tilde{z}}_1\\ z_2\\ \tau \end{pmatrix}; \begin{pmatrix} \bar{\Omega}_{11}&-\bar{\Omega}_{12} & \Delta_1\\ -\bar{\Omega}_{21}&\bar{\Omega}_{22} &-\Delta_2\\ \Delta_1^{\top}& -\Delta_2^{\top}& \Gamma \end{pmatrix}\right\} \end{aligned}$$

and

$$\begin{aligned} F_{-Z_1}({\tilde{z}}_1)=\frac{1}{\Phi_m(\tau;\Gamma)}\, \Phi_{d_1+m}\left\{ \begin{pmatrix} {\tilde{z}}_1 \\ \tau \end{pmatrix}; \begin{pmatrix} \bar{\Omega}_{11} &\Delta_1\\ \Delta_1^{\top}&\Gamma \end{pmatrix}\right\}. \end{aligned}$$

Taking the ratio of the last two expressions, we obtain

$$\begin{aligned} F_{Y_2}(y_2 \mid Y_1+y_1>0) = \frac{\Phi_{d_1+d_2+m}\left\{ \begin{pmatrix} {\tilde{z}}_1\\ z_2\\ \tau \end{pmatrix}; \begin{pmatrix} \bar{\Omega}_{11}&-\bar{\Omega}_{12} & \Delta_1\\ -\bar{\Omega}_{21}&\bar{\Omega}_{22} &-\Delta_2\\ \Delta_1^{\top}& -\Delta_2^{\top}& \Gamma \end{pmatrix}\right\}}{\Phi_{d_1+m}\left\{ \begin{pmatrix} {\tilde{z}}_1 \\ \tau \end{pmatrix}; \begin{pmatrix} \bar{\Omega}_{11} &\Delta_1\\ \Delta_1^{\top}&\Gamma \end{pmatrix}\right\}} \end{aligned}$$

which is the distribution function of a SUN variable with parameters indicated in (25), taking into account (5).

For statement (26), notice that the event \(\{Y_1+y_1<0\}\) coincides with \(\{-Y_1+(-y_1)>0\}\) and apply (25) to the distribution of \((-Y_1,Y_2)\) with \(y_1\) replaced by \(-y_1\). The distribution of \((-Y_1,Y_2)\) is essentially given by (28), up to a change of location and scale. \(\square \)

Clearly, the special case of Proposition 4 with \(m=1\) applies to the extended SN distribution. By following the same logic as in Proposition 4, it is conceptually simple, although algebraically slightly intricate, to write the conditional distribution of \((Y_2|Y_1\in I)\) where I is an event specified by mixed-direction inequalities on the components of \(Y_1\); a sketch of the parameter construction for (25) is given below.
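
In code, assembling the parameter set of (25) amounts to block extraction; the following sketch is our own helper (under the numpy assumptions of the earlier snippets), with the partition sizes implied by the input \(d_1\):

```python
def sun_cond_orthant(y1, xi, Omega, Delta, tau, Gamma, d1):
    """Parameters of (Y_2 | Y_1 + y_1 > 0) as given in (25)."""
    omega = np.sqrt(np.diag(Omega))
    Obar = Omega / np.outer(omega, omega)
    z1 = (xi[:d1] + y1) / omega[:d1]                        # \tilde{z}_1
    Delta_new = np.hstack([Obar[d1:, :d1], Delta[d1:]])     # (Obar_21, Delta_2)
    tau_new = np.concatenate([z1, tau])
    Gamma_new = np.block([[Obar[:d1, :d1], Delta[:d1]],
                          [Delta[:d1].T, Gamma]])
    return xi[d1:], Omega[d1:, d1:], Delta_new, tau_new, Gamma_new
```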

3.3 An alternative parameterization

The currently used parameterization of a SUN distribution involves the triplet \((\Omega , \Delta , \Gamma )\) formed by matrices of respective size \(d \times d\), \(d\times m\), \(m\times m\). The components of this triplet cannot be chosen independently from each other since they must satisfy the requirement that

$$\begin{aligned} \Omega^* = \begin{pmatrix} \bar{\Omega} & \Delta \\ \Delta^{\top} & \Gamma \end{pmatrix} \end{aligned}$$
(29)

is a positive definite correlation matrix; here \({{\bar{\Omega }}}\) denotes the correlation matrix associated to \(\Omega \).

This scheme differs from the one of the SN family, where the components of the pair \((\Omega , \alpha )\) can be selected separately from each other. For any choice of \((\Omega , \alpha )\), there exists a vector \(\delta \) such that the analogue of (29) is a positive definite matrix. Vice versa, for any such \(\Omega ^*\) we can determine the corresponding \((\Omega , \alpha )\) pair.

Our aim is to introduce an alternative parameterization of the SUN family, based on an alternative triplet \((\Omega , A, \Gamma )\) whose three components can be selected independently of each other, in a logic similar to the SN case. The constraints on the individual components remain as before, namely \(\Omega >0\) and \(\Gamma \) a positive definite correlation matrix, while A is an arbitrary \(d\times m\) matrix.

To accomplish our task, we make use of the following matrix theory result, which we state in general form because of its wider applicability. Given two symmetric positive definite matrices, the result shows how to build a larger positive definite matrix with the original matrices on its diagonal, for any choice of an arbitrary additional matrix of conformable dimensions.

For any symmetric positive definite matrix H, the expression \(H^{1/2}\) denotes the unique symmetric positive definite square root of H.

Lemma 5

Given symmetric positive definite matrices Q and S of respective size \(d\times d\) and \(m\times m\), and an arbitrary \(d\times m\) matrix B, the matrix

$$\begin{aligned} M = \begin{pmatrix} Q & R \\ R^{\top} & S\end{pmatrix} \end{aligned}$$
(30)

is positive definite if we define

$$\begin{aligned} R = Q\,B\, S^{1/2}\left( I_m+ S^{1/2}\,B^{\top }Q B\,S^{1/2}\right) ^{-1/2} S^{1/2}\,. \end{aligned}$$
(31)

Moreover, given a positive definite matrix M partitioned as in (30), where Q and S are square matrices, we can associate with it the matrix

$$\begin{aligned} B = Q^{-1}R\,S^{-1/2}\left( I_m- S^{-1/2}R^{\top }\,Q^{-1}RS^{-1/2}\right) ^{-1/2}S^{-1/2} \end{aligned}$$
(32)

such that the transformation of this matrix via (31) returns the original matrix R in (30).

Proof

Since \(Q>0\), the fact that \(M>0\) is proved by showing that \(S-R^{\top }{Q^{-1}}R>0\) when R is as in (31). Introduce the auxiliary matrix \({{\tilde{B}}}= B\,S^{1/2}\) and expand

$$\begin{aligned} S-R^{\top}{Q^{-1}}R &= S - S^{1/2} (I_m + {\tilde{B}}^{\top}Q {\tilde{B}})^{-1/2}\, {\tilde{B}}^{\top}Q {\tilde{B}}\, (I_m + {\tilde{B}}^{\top}Q {\tilde{B}})^{-1/2} S^{1/2}\\ &= S^{1/2}\left\{ I_m - (I_m + {\tilde{B}}^{\top}Q {\tilde{B}})^{-1/2}\, {\tilde{B}}^{\top}Q {\tilde{B}}\, (I_m + {\tilde{B}}^{\top}Q {\tilde{B}})^{-1/2}\right\} S^{1/2}\\ &= S^{1/2}(I_m + {\tilde{B}}^{\top}Q {\tilde{B}})^{-1/2}\{I_m + {\tilde{B}}^{\top}Q {\tilde{B}} - {\tilde{B}}^{\top}Q {\tilde{B}}\} (I_m + {\tilde{B}}^{\top}Q {\tilde{B}})^{-1/2}S^{1/2}\\ &= S^{1/2}(I_m + {\tilde{B}}^{\top}Q {\tilde{B}})^{-1}S^{1/2}\\ &= (S^{-1} + B^{\top}Q B)^{-1} > 0. \end{aligned}$$

An alternative proof of the fact \(M>0\) works by recalling that \(S>0\) and by showing that

$$\begin{aligned} Q - R S^{-1}R^{\top} &= Q - Q {\tilde{B}} (I_m + {\tilde{B}}^{\top}Q {\tilde{B}})^{-1/2} S^{1/2} S^{-1}S^{1/2} (I_m + {\tilde{B}}^{\top}Q {\tilde{B}})^{-1/2} {\tilde{B}}^{\top}Q\\ &= Q - Q {\tilde{B}} (I_m + {\tilde{B}}^{\top}Q {\tilde{B}})^{-1}{\tilde{B}}^{\top}Q\\ &= (Q^{-1}+ {\tilde{B}} {\tilde{B}}^{\top})^{-1} = (Q^{-1}+ B S B^{\top})^{-1} > 0. \end{aligned}$$

To prove the second statement, we compute (31) when matrix B is as given in (32). For algebraic convenience, define \(C=S^{-1/2}R^{\top }Q^{-1}R S^{-1/2}\) and expand

$$\begin{aligned} R &= R S^{-1/2} (I_m-C)^{-1/2}\{I_m+(I_m-C)^{-1/2} C (I_m-C)^{-1/2}\}^{-1/2} S^{1/2}\\ &= R S^{-1/2} (I_m-C)^{-1/2}\{(I_m-C)^{-1/2}(I_m-C+C) (I_m-C)^{-1/2}\}^{-1/2} S^{1/2}\\ &= R S^{-1/2} (I_m-C)^{-1/2}\{(I_m-C)^{-1}\}^{-1/2} S^{1/2}\\ &= R. \end{aligned}$$

\(\square \)

For the above-stated purpose in the SUN context, we use Lemma 5 as follows. Given two positive definite correlation matrices \({{\bar{\Omega }}}\) and \(\Gamma \) of size \(d\times d\) and \(m\times m\) and an arbitrary matrix A of size \(d\times m\), we compute the right-hand side of (31) with \(Q={{\bar{\Omega }}}\), \(S=\Gamma \) and \(B=A\). The resulting matrix provides a suitable term \(\Delta \) for (29). Note that, when \(m=1\), we obtain a well-known expression linking quantities of the SN family, as recalled in the second paragraph of the present subsection.
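
A numerical sketch of this construction, assuming scipy.linalg for the matrix square roots (function names are ours): `make_Delta` maps \((\bar{\Omega}, \Gamma, A)\) to \(\Delta\) via (31), and `recover_A` inverts the map via (32), so the round trip \(\Delta \to A \to \Delta\) returns the starting matrix, in agreement with Lemma 5.

```python
from scipy.linalg import sqrtm

def make_Delta(Obar, Gamma, A):
    """Expression (31) with Q = Obar, S = Gamma, B = A."""
    m = Gamma.shape[0]
    Sh = np.real(sqrtm(Gamma))                                  # Gamma^{1/2}
    inner = np.eye(m) + Sh @ A.T @ Obar @ A @ Sh
    return Obar @ A @ Sh @ np.real(np.linalg.inv(sqrtm(inner))) @ Sh

def recover_A(Obar, Gamma, Delta):
    """Expression (32) with R = Delta."""
    m = Gamma.shape[0]
    Smh = np.real(np.linalg.inv(sqrtm(Gamma)))                  # Gamma^{-1/2}
    C = Smh @ Delta.T @ np.linalg.solve(Obar, Delta) @ Smh
    return (np.linalg.solve(Obar, Delta) @ Smh
            @ np.real(np.linalg.inv(sqrtm(np.eye(m) - C))) @ Smh)
```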