On p-generalized elliptical random processes

Abstract

We introduce rank-k-continuous axis-aligned p-generalized elliptically contoured distributions and study their properties such as stochastic representations, moments, and density-like representations. Applying the Kolmogorov existence theorem, we prove the existence of random processes having axis-aligned p-generalized elliptically contoured finite dimensional distributions with arbitrary location and scale functions and a consistent sequence of density generators of p-generalized spherically invariant distributions. In particular, we consider scale mixtures of rank-k-continuous axis-aligned p-generalized elliptically contoured Gaussian distributions and answer the question of when an n-dimensional rank-k-continuous axis-aligned p-generalized elliptically contoured distribution is representable as a scale mixture of an n-dimensional rank-k-continuous axis-aligned p-generalized Gaussian distribution for a suitable mixture distribution of a positive random variable. Based on this class of multivariate probability distributions, we introduce scale mixed p-generalized Gaussian processes having axis-aligned finite dimensional distributions, which are p-generalizations of elliptical random processes. Additionally, some of their characteristic properties are discussed, and approximations of trajectories of several examples such as p-generalized Student-t and p-generalized Slash processes having axis-aligned finite dimensional distributions are simulated with the help of algorithms for simulating rank-k-continuous axis-aligned p-generalized elliptically contoured distributions.

Introduction

Random processes may be constructed and characterized in different ways. Apart from constructions via families of random variables whose members satisfy, e.g., specific autoregressive relations or are coefficients of specific series representations, the existence of random processes can be studied following the fundamental existence theorem due to Kolmogorov (1933). The explicit knowledge of the family of finite dimensional distributions (fdds) can then be used to establish some of the properties of the random process by proving corresponding ones of the fdds. Basic technical problems to be solved this way belong to multivariate distribution theory. In the present paper, Kolmogorov’s theorem is used to prove the existence of real valued random processes having axis-aligned p-generalized elliptically contoured (apec) fdds, thus being p-generalizations of elliptical random processes having axis-aligned fdds.

Well studied examples of random processes which can be constructed via Kolmogorov’s existence theorem are real valued Gaussian processes with emphasis on the Brownian motion, see Shiryaev (1996) and Schilling and Partzsch (2014). Apart from further examples such as random processes with independent values, random processes with independent increments as well as Markov processes, spherically invariant random processes, also known as elliptical random processes, can be constructed this way. The latter are introduced in Vershik (1964) as random processes consisting of quadratically integrable random variables such that if two of them have the same variance, they follow the same distribution. Corresponding characteristic functions and densities are determined in Blake and Thomas (1968). Yao (1973) and Kano (1994) characterize spherically invariant random processes by establishing that their families of fdds are what are now called scale mixtures of Gaussian distributions having one and the same mixture distribution. The notion of a scale mixture is first introduced in Andrews and Mallows (1974) and, independently, Wise and jun Gallagher (1978) show that an elliptical random process can be represented as a product of a Gaussian process and a positive random variable being independent of it. Additionally, in Huang and Cambanis (1979), the structure of the space of all second order spherically invariant random processes is studied and used to solve nonlinear estimation problems. Finally, based on the concepts of expansive and semi-expansive sequences of elliptically contoured distributions and apart from representation theorems analogous to those in Yao (1973) and Kano (1994), a formula to determine the corresponding mixture distribution of the family of fdds of a spherically invariant random process is derived in Gómez-Sánchez-Manzano et al. (2006).

Besides a thematically assorted summary of several articles on the theory of spherically invariant random processes, numerous applications of these random processes such as the modeling of bandlimited speech waveforms, radar clutter, radio propagation disturbances, and equalization and array processing are dealt with in Yao (2003). Furthermore, the author discusses simulations of trajectories of spherically invariant random processes based on the work in Brehm and Stammler (1987), Conte et al. (1991), and Rangaswamy et al. (1995). More recent applications deal with fading models from spherically invariant random processes in Biglieri et al. (2015) and with MIMO radar target localization and performance evaluation under spherically invariant random process clutter in Zhang et al. (2017).

The notion of a scale mixture of Gaussian distributions is introduced in Andrews and Mallows (1974) as the distribution of the product of a Gaussian variable and an independent positive random variable. A multivariate generalization is given in Lange and Sinsheimer (1993). Using numerous equivalent definitions, scale mixtures of Gaussian distributions are also studied in West (1987), Gneiting (1997), Eltoft et al. (2006), Gómez-Sánchez-Manzano et al. (2006, 2008), and Hashorva (2012). According to Andrews and Mallows (1974), Lange and Sinsheimer (1993), and Gómez-Sánchez-Manzano et al. (2006), scale mixtures of Gaussian distributions are special cases of elliptically contoured distributions and an elliptically contoured distribution is a scale mixture of a Gaussian distribution if and only if the composition of its density generator and the square root function is completely monotone. Moreover, examples of scale mixtures of Gaussian distributions are Pearson type VII distributions, power exponential distributions as well as Slash distributions.

Applications of scale mixtures of Gaussian distributions are given in the fields of natural images, insurances and quantitative genetics in Wainwright and Simoncelli (2000), Choy and Chan (2003), and Gómez-Sánchez-Manzano et al. (2008). More recent applications are Gaussian scale mixture models for robust multivariate linear regression with missing data in Ala-Luhtala and Piché (2016), testing homogeneity in a scale mixture of Gaussian distributions in Niu et al. (2016), and adaptive robust regression with continuous Gaussian scale mixture errors in Seo et al. (2017).

For any choice of p>0, introducing the notion of a p-generalization of a spherically invariant random process means the transition from spherically contoured to ln,p-symmetric fdds, the transition from regular elliptically contoured to suitably introduced p-generalized elliptically contoured distributions, and the associated consideration of suitable non-Euclidean instead of Euclidean geometries. To be more specific, a well-known example is the n-dimensional p-generalized (spherical) Gaussian distribution being introduced already in Subbotin (1923) and having the probability density function (pdf)

$$f(x) = \left(\frac{ p^{1-\frac{1}{p}} }{ 2\Gamma\!\left(\frac{1}{p}\right)} \right)^{n} \exp\left\{ -\frac{1}{p} \sum\limits_{i=1}^{n}{\left|x_{i}\right|^{p}} \right\}, \quad x = \left(x_{1},\ldots,x_{n}\right)^{\text{\texttt{T}}}\in\mathbb{R}^{n}, $$

and p-generalized Weibull, Pearson type II and Pearson type VII distributions are dealt with in Gupta and Song (1997). Additionally, a p-generalized spherical coordinate transformation, a p-generalized surface content measure as well as numerous p-generalized probability distributions and statistics such as p-generalized versions of the χ2-, Student and Fisher distributions are considered in Richter (2007, 2009).

The more general class of continuous ln,p-symmetric distributions is studied in Arellano-Valle and Richter (2012), Kalke and Richter (2013), Müller and Richter (2016a, b, 2017a, b) as well as several references given there. In the present paper, we introduce a class of multivariate apec distributions containing both regular and singular distributions and covering the classes of continuous ln,p-symmetric and common axis-aligned elliptically contoured distributions.

For a nonempty index set \(I \subseteq \mathbb {R}\), a Polish space (E,ρ) with Borel sigma field \(\mathcal {B}\) on E with respect to ρ, and a family Q of probability measures on the product spaces \(\left (E^{J},\mathcal {B}^{J}\right)\) for nonempty finite subsets J⊆I, Kolmogorov’s existence theorem states that if Q is projective on E, then there exists a random process having time set I and state space E such that its family of fdds is equal to Q. The projectivity of Q on E can be shown by checking the consistency conditions in Kolmogorov (1956). This will be discussed for the particular case \(E=\mathbb {R}\) in “Sketch of proof” section. This way, we prove the existence of real valued random processes having apec fdds. Such random processes are p-generalizations of elliptical random processes having axis-aligned fdds. Moreover, for the special case of scale mixed p-generalized Gaussian processes having axis-aligned fdds, basic properties such as characteristic representations, stationarity properties and specific closedness properties are studied and certain approximations of their trajectories are simulated. Preparing for these results, we prove firstly that an apec distribution can be represented by a scale mixture of the apec Gaussian distribution if and only if its density-like generator, composed with the pth root function, is completely monotone and secondly that the corresponding mixture distribution is in a well defined way closely connected to the inverse Laplace-Stieltjes transform of this composition.

The paper is structured as follows. In “The class of n-dimensional rank-k-continuous axis-aligned p-generalized elliptically contoured distributions” section, n-dimensional apec distributions are introduced as location-scale generalizations of continuous ln,p-symmetric distributions, and some of their properties such as stochastic representations, moments and pdf-like representations are discussed. Furthermore, the pdfs of bivariate p-generalized spherical as well as of bivariate apec Gaussian distributions are visualized for several values of p>0. Our main result on the existence of p-generalizations of elliptical random processes is presented in “Main result” section. A sketch of its proof consisting of four basic steps is given in “Sketch of proof” section, and an approximate simulation of the trajectories of the new random processes is discussed in “Simulation” section. Examples illustrating the developed theory are studied in the fourth section. In “Scale mixtures of apec Gaussian distributions” section, scale mixtures of multivariate apec Gaussian distributions are introduced and some of their characteristic properties such as stochastic representations, moments, specific conditional distributions, and their connections to completely monotone functions are discussed. Random processes whose families of fdds are families of scale mixtures of multivariate apec Gaussian distributions with one and the same mixture distribution as well as some of their basic properties are studied in “Scale mixed p-generalized Gaussian processes having axis-aligned fdds” section. All proofs are given in “Proofs” section. For the sake of better readability, the proofs of certain results are prepared by proving certain particular cases first. An algorithm to simulate arbitrary apec distributions and another one to particularly simulate scale mixtures of apec Gaussian distributions with an explicitly known mixture distribution are presented in Appendix 7.1. The latter is used in Appendix 7.2 to simulate approximations of trajectories of p-generalized Student-t as well as p-generalized Slash processes having axis-aligned fdds. Finally, we remark that all figures presented here are made using the program MATLAB.

The class of n-dimensional rank-k-continuous axis-aligned p-generalized elliptically contoured distributions

For each p>0 and \(n\in \mathbb {N}\), we denote the p-functional in \(\mathbb {R}^{n}\) by \(|x|_{p} = \left (\sum \limits _{i=1}^{n}{|x_{i}|^{p}} \right)^{\frac {1}{p}}\), \(x = (x_{1},\dots,x_{n})^{\text {\texttt {T}}} \in \mathbb {R}^{n}\), and the ln,p-generalized surface content of the ln,p-unit sphere \(S_{n,p} = \{ x\in \mathbb {R}^{n}\colon |x|_{p}=1 \}\) by ωn,p,

$$\omega_{n,p} = \frac{ \left(2\Gamma\left(\frac{1}{p}\right) \right)^{n} }{ p^{n-1}\Gamma\left(\frac{n}{p}\right)}. $$

Furthermore, a function \(g\colon [0,\infty)\to[0,\infty)\) satisfying \(0<I_{n}(g)<\infty\) is called a density generating function of an n-variate distribution where \(I_{n}(g) = \int \limits _{0}^{\infty }{ r^{n-1} g(r) \:dr}\). An n-dimensional random vector \(X\colon \Omega \to \mathbb {R}^{n}\) on a probability space \((\Omega,\mathfrak {A},P)\) having the pdf \(\frac { g\left (\left |x\right |_{p}\right) }{ \omega _{n,p}\,I_{n}(g) }\), \(x\in \mathbb {R}^{n}\), is called continuous ln,p-symmetrically distributed with density generating function g. A density generating function g of a continuous ln,p-symmetric distribution satisfying \(I_{n}(g) = \frac {1}{\omega _{n,p}}\) is called a density generator (dg) and denoted by g(n,p). The pdf of the continuous ln,p-symmetric distribution with dg g(n,p) is g(n,p)(|x|p), \(x\in \mathbb {R}^{n}\), and the corresponding probability law is denoted by \(\Phi _{g^{(n,p)}}\). With a view to the special cases listed below, \(\Phi _{g^{(n,p)}}\) may also be called n-dimensional continuous p-generalized spherical distribution with dg g(n,p).

A well-known example of the latter type of probability distributions is the n-dimensional p-generalized (spherical) Gaussian distribution \(N_{n,p} = \Phi _{g_{PE}^{(n,p)}}\) where

$$g_{PE}^{(n,p)}(r) = \left(\frac{ p^{1-\frac{1}{p}} }{ 2\Gamma\!\left(\frac{1}{p}\right)} \right)^{n} \exp\left\{ -\frac{1}{p} r^{p} \right\}, \quad r\geq 0. $$

For visualizations of the pdf of this distribution for n{1,2} and several p>0, we refer to Kalke and Richter (2013) and Müller and Richter (2015). The class of continuous ln,2-symmetric distributions coincides with the class of n-variate continuous spherical distributions and Nn,2 is the n-dimensional standard Gaussian distribution. Numerous properties such as stochastic representations, moments, and marginal distributions and several types of dgs are discussed in Gupta and Song (1997), Richter (2009), Arellano-Valle and Richter (2012), and Müller and Richter (2016a).
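As a quick numerical sanity check of the dg property \(I_{n}\!\left(g_{PE}^{(n,p)}\right) = \frac{1}{\omega_{n,p}}\), both quantities can be evaluated by quadrature. The following sketch assumes Python with NumPy and SciPy (the paper itself works with MATLAB):

```python
import numpy as np
from scipy.special import gamma
from scipy.integrate import quad

def omega(n, p):
    # l_{n,p}-generalized surface content of the unit sphere S_{n,p}
    return (2 * gamma(1 / p)) ** n / (p ** (n - 1) * gamma(n / p))

def g_pe(r, n, p):
    # dg of the n-dimensional p-generalized Gaussian distribution
    return (p ** (1 - 1 / p) / (2 * gamma(1 / p))) ** n * np.exp(-r ** p / p)

for n in (1, 2, 5):
    for p in (0.5, 1.0, 2.0, 3.0):
        I_n, _ = quad(lambda r: r ** (n - 1) * g_pe(r, n, p), 0, np.inf)
        print(n, p, omega(n, p) * I_n)   # each printed value should be close to 1
```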

Let \(\mu \in \mathbb {R}^{n}\) be a constant vector and D=diag(d1,…,dn) an n×n diagonal matrix having nonnegative diagonal entries and positive rank rk(D)=k. Moreover, let I1={i1,…,ik}⊆{1,…,n} with |I1|=k and i1<i2<…<ik be the set of indices such that di>0 if i∈I1 and di=0 if i∈I2={1,…,n}∖I1. Let \(e_{i}^{(n)}\) denote the ith unit vector in \(\mathbb {R}^{n}\), \(0_{n \times n}\) the n×n zero matrix, \(S_{1} = \text { diag }\left (d_{i_{1}},\ldots,d_{i_{k}}\right) \in \mathbb {R}^{k \times k}\), \(W_{1} = \left (e_{i_{1}}^{(n)} \cdots \, e_{i_{k}}^{(n)} \right) \in \mathbb {R}^{n \times k}\) and \(W_{2} \in \mathbb {R}^{n \times (n-k)}\) a matrix having columns \(e_{i}^{(n)}\) for all i∈I2, then,

$$W_{1}^{\text{\texttt{T}}} D W_{1} = S_{1} \qquad\text{ and }\qquad W_{2}^{\text{\texttt{T}}} D W_{2} = 0_{(n-k) \times (n-k)}. $$

Let \(\sqrt {S_{1}} = \text { diag }\left (\sqrt {d_{i_{1}}},\ldots,\sqrt {d_{i_{k}}}\right)\). The distribution of a random vector X satisfying the stochastic representation

$$ X \stackrel{d}{=} \mu + W_{1} \sqrt{S_{1}} Y \qquad \text{where } Y \sim \Phi_{g^{(k,p)}} $$
(1)

is called an n-dimensional rank-k-continuous axis-aligned p-generalized elliptically contoured (kapec) distribution with location parameter μ, scaling matrix D and dg g(k,p) and is denoted by AECn,p (μ,D,g(k,p)). For simplicity, the distribution of such a random vector X is just called an apec distribution if its continuity and dimension as well as the rank of the diagonal matrix parameter D are contextually clear or play only a minor role.
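The stochastic representation (1) translates directly into a two-step sampling scheme: draw \(Y \sim \Phi_{g^{(k,p)}}\) and apply the affine map \(y \mapsto \mu + W_{1}\sqrt{S_{1}}\,y\). The following sketch does this for the p-generalized Gaussian dg \(g_{PE}^{(k,p)}\); it assumes Python with NumPy (the paper’s own simulations use MATLAB) and uses the standard fact that \(|Y_{i}|^{p}/p\) is Gamma(1/p,1)-distributed when \(Y_{i}\) is a univariate p-generalized Gaussian variable:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_pgen_gaussian(k, p, size=1):
    # componentwise sampling from Phi_{g_PE^{(k,p)}}: |Y_i|^p / p ~ Gamma(1/p, 1),
    # multiplied by independent symmetric random signs
    g = rng.gamma(1 / p, 1.0, size=(size, k))
    return rng.choice((-1.0, 1.0), size=(size, k)) * (p * g) ** (1 / p)

def sample_apec(mu, D_diag, p, size=1):
    # stochastic representation (1): X = mu + W_1 sqrt(S_1) Y, Y ~ Phi_{g^{(k,p)}}
    mu = np.asarray(mu, dtype=float)
    d = np.asarray(D_diag, dtype=float)
    idx = np.flatnonzero(d > 0)                 # index set I_1, k = |I_1|
    Y = sample_pgen_gaussian(len(idx), p, size)
    X = np.tile(mu, (size, 1))
    X[:, idx] += np.sqrt(d[idx]) * Y            # coordinates in I_2 stay at mu
    return X

# singular example with D = diag(3, 0, 6): the second coordinate equals mu_2 a.s.
X = sample_apec([-2.0, 0.0, 2.0], [3.0, 0.0, 6.0], p=3.0, size=5)
```

For a non-Gaussian dg, only the sampler for Y has to be exchanged, e.g. via the representation of Lemma 2.1 below.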

Here and in what follows, \(X \stackrel{d}{=} Z\) and X∼Ψ mean that the random vectors X and Z follow the same distribution law and that the random vector X follows the distribution law \(\mathfrak {L}({X}) = \Psi \), respectively. In particular, for the special choice of μ and D to be the zero vector 0n and identity matrix In×n in \(\mathbb {R}^{n}\), respectively, we have \(AEC_{n,p}\!\left (0_{n},I_{n \times n},g^{(n,p)}\right) = \Phi _{g^{(n,p)}}\). For the special case of p=2, the class of AECn,2 (μ,D,g(k,2))-distributions is identical with the class of common n-variate axis-aligned elliptically contoured distributions. Furthermore, \(AEC_{n,p}\!\left (\mu,D,g_{PE}^{(k,p)}\right)\) is called n-dimensional kapec Gaussian distribution and is denoted ANn,p(μ,D). The family of apec distributions with full rank scaling matrices as well as their star-shaped extensions and certain aspects of their inferential applications are studied in Richter (2014, 2016, 2017).

Because of relation (1), a stochastic representation and properties of moments of n-dimensional kapec distributions stated in Lemmata 2.1 and 2.2 follow immediately from corresponding results for continuous lk,p-symmetric distributions in Richter (2009) and Arellano-Valle and Richter (2012).

Lemma 2.1

Let X∼AECn,p (μ,D,g(k,p)) where rk(D)=k. Then, the random vector X satisfies the stochastic representation

$$X \stackrel{d}{=} \mu + R \cdot W_{1} \sqrt{S_{1}} U_{p}^{(k)} $$

where the random vector \(U_{p}^{(k)}\) is k-dimensional p-generalized uniformly distributed on Sk,p, R and \(U_{p}^{(k)}\) are stochastically independent, and R is a nonnegative random variable with pdf

$$f_{R}(r) = \omega_{k,p} \, r^{k-1} g^{(k,p)}(r), \quad r>0. $$
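For simulation purposes, \(U_{p}^{(k)}\) can be generated by p-normalizing a p-generalized Gaussian vector, since X/|X|p is p-generalized uniformly distributed on Sk,p for any continuous lk,p-symmetrically distributed X, cf. Kalke and Richter (2013). A minimal sketch in Python/NumPy (the function name is ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_uniform_Skp(k, p, size=1):
    # Z ~ N_{k,p} componentwise via |Z_i|^p / p ~ Gamma(1/p, 1) and random signs;
    # then U = Z / |Z|_p is p-generalized uniformly distributed on S_{k,p}
    g = rng.gamma(1 / p, 1.0, size=(size, k))
    z = rng.choice((-1.0, 1.0), size=(size, k)) * (p * g) ** (1 / p)
    return z / (np.sum(np.abs(z) ** p, axis=1) ** (1 / p))[:, None]
```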

Lemma 2.2

Let X∼AECn,p (μ,D,g(k,p)) where rk(D)=k. Then, \(\mathbb {E}(X) = \mu \) if Ik+1 (g(k,p)) is finite and \(\mathrm {Cov}(X) = \sigma _{g^{(k,p)}}^{2} D\) if Ik+2 (g(k,p)) is finite where the univariate variance component \(\sigma _{g^{(k,p)}}^{2}\) of \(\Phi _{g^{(k,p)}}\) satisfies \(\sigma _{g^{(k,p)}}^{2} = \frac {\Gamma \!\left (\frac {3}{p}\right)\Gamma \!\left (\frac {k}{p}\right)} {\Gamma \!\left (\frac {1}{p}\right)\Gamma \!\left (\frac {k+2}{p}\right)} \, \omega _{k,p} \, I_{k+2}(g^{(k,p)})\). The components of X are independent if and only if \(g^{(k,p)} = g_{PE}^{(k,p)}\).

The justification for calling \(\sigma _{g^{(n,p)}}^{2}\) the univariate variance component of \(\Phi _{g^{(n,p)}}\) is given by the following lemma with k=1. Examples of \(\sigma _{g^{(n,p)}}^{2}\) are given in Müller and Richter (2016b). Let us remark that, according to Arellano-Valle and Richter (2012), for k=1,…,n−1, the marginal dg \(g_{(n)}^{(k,p)}\) of an arbitrary k-dimensional marginal distribution of \(\Phi _{g^{(n,p)}}\) is

$$g_{(n)}^{(k,p)}(r) = \frac{\omega_{n-k,p}}{p} \int\limits_{r^{p}}^{\infty}{ \left(y-r^{p}\right)^{\frac{n-k}{p}-1} g^{(n,p)}\!\left(\sqrt[p]{y}\right) \:dy}, \quad r\in[0,\infty), $$

where the variability of the choice of the k marginal variables is established by the permutation invariance of \(\Phi _{g^{(n,p)}}\), see Müller and Richter (2016b).
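For the Gaussian dg, the marginal dg can be computed by quadrature and compared with \(g_{PE}^{(k,p)}\); the agreement of both functions is exactly the consistency property formalized in Lemma 3.1 below. A sketch assuming Python with NumPy and SciPy:

```python
import numpy as np
from scipy.special import gamma
from scipy.integrate import quad

def omega(n, p):
    return (2 * gamma(1 / p)) ** n / (p ** (n - 1) * gamma(n / p))

def g_pe(r, n, p):
    return (p ** (1 - 1 / p) / (2 * gamma(1 / p))) ** n * np.exp(-r ** p / p)

def marginal_dg(r, n, k, p):
    # marginal dg g_{(n)}^{(k,p)} of Phi_{g^{(n,p)}} via the integral formula above
    integrand = lambda y: (y - r ** p) ** ((n - k) / p - 1) * g_pe(y ** (1 / p), n, p)
    val, _ = quad(integrand, r ** p, np.inf)
    return omega(n - k, p) / p * val

n, k, p, r = 4, 2, 1.5, 0.7
print(marginal_dg(r, n, k, p), g_pe(r, k, p))   # both values should agree
```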

Lemma 2.3

For k=1,…,n−1,

$$\sigma_{g_{(n)}^{(k,p)}}^{2} = \sigma_{g^{(n,p)}}^{2}. $$

Denoting \(M_{n}^{*} = [0,\pi)^{\times (n-2)} \times [0,2\pi)\) and \(M_{n} = [0,\infty) \times M_{n}^{*}\) for n≥2, let the ln,p-spherical coordinate transformation \(SPH_{p}^{(n)} \colon M_{n} \to \mathbb {R}^{n}\) be defined as in Richter (2007). Note that \(SPH_{p}^{(n)}\) is bijective a.e. in Mn and its inverse mapping as well as its Jacobian are explicitly known. The next lemma combines and states more precisely some earlier results and introduces a second stochastic representation of random vectors following the distribution AECn,p (μ,D,g(k,p)).

Lemma 2.4

Let X∼AECn,p (μ,D,g(k,p)) where rk(D)=k. Then, the random vector X satisfies the stochastic representation

$$X \stackrel{d}{=} \mu + W_{1} \sqrt{S_{1}} \cdot SPH_{p}^{(k)}\!\left(R,\Psi_{1},\ldots,\Psi_{k-1}\right) $$

where the nonnegative random variables R,Ψ1,…,Ψk−1 are mutually stochastically independent having pdfs

Here, \(N_{p}(\psi)=\left(\left|\sin(\psi)\right|^{p}+\left|\cos(\psi)\right|^{p}\right)^{1/p}\) and fZ denotes the pdf of Z.

While the distribution AECn,p (μ,D,g(n,p)) is regular and has a pdf, the distribution AECn,p (μ,D,g(k,p)) is singular if rk(D)=k<n and may be characterized by a pdf-like representation as was done in Khatri (1968) and Rao (1973, pp. 527-528) in case of singular normal distributions and in Arellano-Valle and Azzalini (2006, Appendix C) in case of singular unified skew-normal distributions. To this end, let \(U_{W_{2}^{\text {\texttt {T}}}}(\mu) = \{ x\in \mathbb {R}^{n} \colon W_{2}^{\text {\texttt {T}}} x = W_{2}^{\text {\texttt {T}}} \mu \}\) be a k-dimensional affine subspace in \(\mathbb {R}^{n}\) and \(\lambda _{U_{W_{2}^{\text {\texttt {T}}}}(\mu)}^{(k)}\) the k-dimensional Lebesgue measure defined on \(U_{W_{2}^{\text {\texttt {T}}}}(\mu)\).

Lemma 2.5

Let X∼AECn,p (μ,D,g(k,p)) where rk(D)=k. Then, the distribution of X has the pdf-like representation

$$\begin{array}{*{20}l} & \frac{1}{\sqrt{d_{i_{1}} \cdot\ldots\cdot d_{i_{k}}}} \: g^{(k,p)}\!\left(\left| \sqrt{S_{1}}^{-1} W_{1}^{\text{\texttt{T}}} (x - \mu) \right|_{p} \right), \quad x\in\mathbb{R}^{n}, \end{array} $$
(2)
$$\begin{array}{*{20}l} \text{and} & W_{2}^{\text{\texttt{T}}} X = W_{2}^{\text{\texttt{T}}} \mu \qquad P-\text{a.s.} \end{array} $$
(3)

where the function given in (2) is interpreted as a pdf in the space \(U_{W_{2}^{\text {\texttt {T}}}}(\mu)\) in which the whole probability mass of X is concentrated according to Eq. 3.

Lemma 2.5 can be read as follows: for X∼AECn,p (μ,D,g(k,p)), the orthogonal projection \(Y = \Pi _{U_{W_{2}^{\text {\texttt {T}}}}(\mu)}(X)\) of X into the subspace \(U_{W_{2}^{\text {\texttt {T}}}}(\mu)\), and any event \(B\in \mathfrak {B}^{n}\), there holds

$$\begin{array}{*{20}l} P\!\left(X \in B \right) &= P\!\left(Y \in \left(B \cap U_{W_{2}^{\text{\texttt{T}}}}(\mu)\right) \right) \\ &= \frac{1}{\sqrt{d_{i_{1}} \cdot\ldots\cdot d_{i_{k}}}} \int\limits_{B}{ g^{(k,p)}\!\left(\left| \sqrt{S_{1}}^{-1} W_{1}^{\text{\texttt{T}}} (x - \mu) \right|_{p} \right) \: \lambda_{U_{W_{2}^{\text{\texttt{T}}}}(\mu)}^{(k)}(dx)} \end{array} $$
(4)

meaning that the probability measure induced by the random vector X, PX=AECn,p (μ,D,g(k,p)), is absolutely continuous with respect to \(\lambda _{U_{W_{2}^{\text {\texttt {T}}}}(\mu)}^{(k)}\). Thus, (2) is the Radon-Nikodym derivative of PX with respect to the Lebesgue measure \(\lambda _{U_{W_{2}^{\text {\texttt {T}}}}(\mu)}^{(k)}\) on the subspace \(U_{W_{2}^{\text {\texttt {T}}}}(\mu)\) of \(\mathbb {R}^{n}\). Because of (4), g(k,p) might be called a density-like generator of AECn,p (μ,D,g(k,p)) if k<n. In particular, if rk(D)=n, then W1=In×n and W2 is not defined. Hence, Eq. 3 is not applicable and the function in (2) is the common pdf of the distribution AECn,p (μ,D,g(n,p)). An example is illustrated in Fig. 1.

Fig. 1 Pdf of \(AN_{2,p}\!\left( \left(\begin{array}{c} -2 \\ 2 \end{array}\right), \left(\begin{array}{cc} 3 & 0 \\ 0 & 6 \end{array}\right) \right)\) for a \(p=\frac{1}{2}\), b p=1, c p=2, d p=3

At the end of this section, our consideration will be slightly extended in order to cover the case k=rk(D)=0 or, equivalently, D=0n×n. To this end, AECn,p (μ,0n×n,g(0,p)) is defined to be the Dirac distribution at \(\mu \in \mathbb {R}^{n}\) where g(0,p) is just a symbol to maintain previous notations.

While each finite dimensional distribution (fdd) of an elliptical process is elliptically contoured, in the next section the existence of random processes will be shown whose families of fdds consist of apec distributions.

Generalized elliptical random processes

3.1 Main result

In order to state our main result, we call a sequence \(g^{(p)} = \left (g^{(k,p)}\right)_{k\in \mathbb {N}}\) of dgs of continuous lk,p-symmetric distributions consistent if the following condition is satisfied for any \(k\in \mathbb {N}\) and almost all \(\left (x_{1},\ldots,x_{k}\right)^{\text {\texttt {T}}} \in \mathbb {R}^{k}\),

$$ \int\limits_{-\infty}^{\infty}{ g^{(k+1,p)}\!\left(\left|\left(x_{1},\ldots,x_{k},x_{k+1}\right)^{\text{\texttt{T}}}\right|_{p}\right) \:dx_{k+1}} = g^{(k,p)}\!\left(\left|\left(x_{1},\ldots,x_{k}\right)^{\text{\texttt{T}}}\right|_{p}\right). $$
(5)

For the particular case of this definition if p=2, we refer to Kano (1994). Moreover, for any nonempty subset I of \(\mathbb {R}\), any functions \(m \colon I \to \mathbb {R}\) and S:I→[0,∞), and any sequence \(g^{(p)} = \left (g^{(k,p)}\right)_{k\in \mathbb {N}}\) of dgs of continuous lk,p-symmetric distributions, let the family

$$\bigcup\limits_{n\in\mathbb{N}} \bigcup\limits_{\substack{ \left\{t_{1},\ldots,t_{n}\right\} \subseteq I \\ \left|\left\{t_{1},\ldots,t_{n}\right\}\right|=n }} \left\{ AEC_{n,p}\!\left(\mu,D,g^{(k,p)}\right) \colon \mu = \left(m(t_{1}),\ldots,m(t_{n})\right)^{\text{\texttt{T}}},\; D=\text{ diag }\left(S(t_{1}),\ldots,S(t_{n}) \right),\; k=\text{rk}(D) \right\} $$

of apec distributions having dgs from g(p) and location and scale functions m and S, respectively, be denoted by \(\mathcal {AEC}_{g^{(p)}}^{I}(m,S)\). Note that strict positivity of S yields a family \(\mathcal {AEC}_{g^{(p)}}^{I}(m,S)\) containing only regular distributions. In contrast, allowing S to be nonnegative, the family \(\mathcal {AEC}_{g^{(p)}}^{I}(m,S)\) consists of both regular and singular distributions. In particular, the univariate member of this family corresponding to t∈I such that S(t)=0 is AEC1,p (m(t),0,g(0,p)), i.e. a univariate kapec distribution with k=0.

Theorem 3.1

If g(p) is consistent, then \(\mathcal {AEC}_{g^{(p)}}^{I}(m,S)\) is projective on \(\mathbb {R}\).

Corollary 3.1

According to the Kolmogorov existence theorem, for any nonempty subset I of \(\mathbb {R}\), functions \(m \colon I \to \mathbb {R}\) and S:I→[0,∞), and consistent sequence g(p), Theorem 3.1 yields the existence of a real-valued random process having \(\mathcal {AEC}_{g^{(p)}}^{I}(m,S)\) as its family of fdds.

A random process defined according to Theorem 3.1 and Corollary 3.1 is called a random process having apec fdds with location and scale functions m and S, respectively, and sequence g(p) of dgs of continuous lk,p-symmetric distributions. Such a random process is denoted by \(\mathop {AEC\!P}_{p}\!\left (m,S;g^{(p)}\right)\).

3.2 Sketch of proof

Because of the complexity of the proof of Theorem 3.1, we first give a sketch of its principal ideas. For the details of the proof, we refer to “Proof of Theorem 3.1” section. The first step and fundamental argument to prove Theorem 3.1 and thus the existence of the random processes according to Corollary 3.1 is to show that the family \(\mathcal {AEC}_{g^{(p)}}^{I}(m,S)\) satisfies Kolmogorov’s consistency conditions. Let the set of all finite and nonempty subsets of I be denoted by \(\mathcal {H}(I)\), \(\mathcal {H}(I) = \left \{ J \subseteq I \colon J\neq \emptyset,\, \left |J\right |<\infty \right \}\). According to Kolmogorov (1956), a family \(Q = \left \{ Q_{J} \right \}_{\{J\in \mathcal {H}(I)\}}\) of probability measures on \(\left (\mathbb {R}^{|J|},\mathfrak {B}^{|J|}\right)\), \(J\in \mathcal {H}(I)\), is projective on \(\mathbb {R}\) if the following two conditions are satisfied:

  1)

    For all pairwise distinct t1,…,tn,tn+1∈I and \(A^{(n)} \in \mathfrak {B}^{n}\),

    $$ Q_{\{ t_{1},\ldots,t_{n},t_{n+1} \}}\!\left(A^{(n)} \times E \right) = Q_{\{ t_{1},\ldots,t_{n} \}}\!\left(A^{(n)} \right). $$
    (6)
  2)

    For all pairwise distinct t1,…,tn∈I, all \(A^{(n)} \in \mathfrak {B}^{n}\), and every permutation π of {1,…,n},

    $$ Q_{\{ t_{1},\ldots,t_{n} \}}\!\left(A^{(n)} \right) = Q_{\{ t_{\pi(1)},\ldots,t_{\pi(n)} \}}\!\left(A_{\pi}^{(n)} \right) $$
    (7)

    where \(A_{\pi }^{(n)} = \left \{ (x_{\pi (1)},\ldots,x_{\pi (n)})^{\text {\texttt {T}}} \colon \left (x_{1},\ldots,x_{n}\right)^{\text {\texttt {T}}} \in A^{(n)} \right \}\).

These two conditions are traditionally formulated using the notion of ordered sets which are assumed to have different elements, i.e. the sets {t1,t2} and {t2,t1} differ from each other if t1≠t2, whereas (7) is not required in case of considering unordered sets, see Shiryaev (1996, p. 168).

Condition (6) ensures that specific marginal distributions of elements of the family Q are elements of this family, too. Proving (6) for the family given in Theorem 3.1 will be done in steps two and three. Since both of them are connected with transitions from joint to marginal distributions, we will use the notion of marginal dgs \(g_{(k)}^{(m,p)}\), m=1,…,k−1, according to “The class of n-dimensional rank-k-continuous axis-aligned p-generalized elliptically contoured distributions” section. Additionally, let \(g_{(k)}^{(k,p)} = g^{(k,p)}\). Making use of the marginal dg, in step two an equivalent formulation of (5) is given in the next lemma.

Lemma 3.1

A sequence \(g^{(p)} = \left (g^{(k,p)}\right)_{k\in \mathbb {N}}\) of dgs of continuous lk,p-symmetric distributions is consistent if and only if for any \(k\in \mathbb {N}\)

$$g_{(k+1)}^{(k,p)} = g^{(k,p)} \qquad\text{a.e. in } [0,\infty). $$

As a consequence, a sequence g(p) of dgs of continuous lk,p-symmetric distributions is consistent if and only if for any \(k\in \mathbb {N}\) the marginal dg \(g_{(k+1)}^{(k,p)}\) corresponding to the (k+1)th element g(k+1,p) of g(p) coincides with the kth element g(k,p). In the third step, for mn, m-dimensional marginal distributions of n-dimensional apec distributions are shown to be m-dimensional apec distributions with suitably modified vector and matrix parameters and transitions to marginal dgs.

Lemma 3.2

For \(\mu = \left (\mu _{1},\ldots,\mu _{n}\right)^{\text {\texttt {T}}}\in \mathbb {R}^{n}\) and D= diag (d1,…,dn) having nonnegative diagonal entries and rank k≥0, let X=(X1,…,Xn)T∼AECn,p (μ,D,g(k,p)). Further, let \(m\in \mathbb {N}\) with m≤n, J={j1,…,jm}⊆{1,…,n} with j1<…<jm, and \(X_{J} = \left (X_{j_{1}},\dots,X_{j_{m}} \right)^{\text {\texttt {T}}}\) the corresponding m-dimensional subvector of X. Then,

$$X_{J} \sim AEC_{m,p}\!\left(\mu_{J},D_{J},g_{(k)}^{(k_{J},p)}\right) $$

where \(\mu _{J} = \left (\mu _{j_{1}},\ldots,\mu _{j_{m}}\right)^{\text {\texttt {T}}}\), \(D_{J} = \text { diag }\!\left (d_{j_{1}},\ldots,d_{j_{m}} \right)\), and kJ=rk(DJ)≥0.

In the final step four, condition (7) ensures that the considered family of probability distributions is big enough in a suitable sense. Its proof in case of \(Q = \mathcal {AEC}_{g^{(p)}}^{I}(m,S)\) is based on the next lemma on distributions of specific linear transformations of random vectors following an apec distribution.

Lemma 3.3

Let X∼AECn,p (μ,D,g(k,p)) with rk(D)=k≥0. Then, for every (n×n)-permutation matrix M and every \(b \in \mathbb {R}^{n}\),

$$\mathfrak{L}(MX + b) = AEC_{n,p}\!\left(M\mu + b, M D M^{\text{\texttt{T}}}, g^{(k,p)} \right). $$

These sketched four steps to prove Theorem 3.1 are outlined in detail in “Proof of Theorem 3.1” section in reverse order. At the end of the present section, we consider an example of random processes being defined by Theorem 3.1 and Corollary 3.1. More general examples are studied in “Scale mixtures and particular p-generalizations of elliptical random processes” section.

Example 3.1

Let \(g_{PE}^{(p)} = \left (g_{PE}^{(k,p)}\right)_{k\in \mathbb {N}}\) be the sequence of all dgs of multivariate p-generalized Gaussian distributions. Then, the consistency of \(g_{PE}^{(p)}\) is immediately seen and for any nonempty subset I of \(\mathbb {R}\) and any functions \(m \colon I \to \mathbb {R}\) and S:I→[0,∞), Theorem 3.1 yields the existence of the real-valued random process AGPp(m,S) having \(\mathcal {AEC}_{g_{PE}^{(p)}}^{I}(m,S)\) as its family of fdds. Such a stochastic process is called a p-generalized Gaussian process having axis-aligned fdds.
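Condition (5) can also be confirmed numerically for \(g_{PE}^{(p)}\); a minimal sketch, assuming Python with NumPy and SciPy:

```python
import numpy as np
from scipy.special import gamma
from scipy.integrate import quad

def g_pe(r, n, p):
    # dg of the n-dimensional p-generalized Gaussian distribution
    return (p ** (1 - 1 / p) / (2 * gamma(1 / p))) ** n * np.exp(-r ** p / p)

k, p = 3, 2.5
x = np.array([0.3, -1.1, 0.8])                     # an arbitrary point in R^k
sx = np.sum(np.abs(x) ** p)
lhs, _ = quad(lambda t: g_pe((sx + abs(t) ** p) ** (1 / p), k + 1, p),
              -np.inf, np.inf)
print(lhs, g_pe(sx ** (1 / p), k, p))              # both sides of (5) coincide
```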

3.3 Simulation

In order to simulate a random process X having apec fdds, we consider I=[0,1] and simulate the marginal vector of X with respect to the equidistant partition \(\left \{ \frac {i}{200} \colon i=0,\ldots,200 \right \}\) of [0,1] to get a realization of the random vector \(\left (X_{0},X_{\frac {1}{200}},\ldots,X_{\frac {199}{200}},X_{1}\right)^{\text {\texttt {T}}}\). Then, we connect the components of this realization in ascending order by linear functions to get an approximate realization of a trajectory of X. Since the components of apec Gaussian distributed random vectors are independent, simulation of the random process AGPp(m,S) according to the method described above is just the simulation of 201 univariate p-generalized Gaussian variables having specific location and scale parameters. We denote functions on [0,1] taking constant values 0 and 1 by 0[0,1] and 1[0,1], respectively. Results of the simulation of the random process AGPp (0[0,1],1[0,1]) are shown for \(p\in \left \{\frac {1}{2},1,2,3\right \}\) in Fig. 2. Note that the scales of the axes depend strongly on the value of p, but also on the specific realization of a trajectory of the process. Moreover, in Fig. 3, the effect different location and scale functions m and S have on simulations of AGP3(m,S) is shown. See also Appendix 7.2 for several other simulations of random processes having apec fdds.
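A minimal version of this simulation scheme may look as follows; it assumes Python with NumPy and Matplotlib instead of the MATLAB environment used for the paper’s figures, and agp_trajectory, m_func and S_func are our illustrative names:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

def agp_trajectory(m_func, S_func, p, n_grid=201):
    # 201 independent univariate p-generalized Gaussian variables on the grid
    # {i/200 : i = 0,...,200} with location m(t_i) and scaling parameter S(t_i)
    t = np.linspace(0.0, 1.0, n_grid)
    g = rng.gamma(1 / p, 1.0, size=n_grid)          # |Z_i|^p / p ~ Gamma(1/p, 1)
    z = rng.choice((-1.0, 1.0), size=n_grid) * (p * g) ** (1 / p)
    return t, np.array([m_func(s) for s in t]) + np.sqrt([S_func(s) for s in t]) * z

# approximate trajectory of AGP_p(0_[0,1], 1_[0,1]) for p = 3
t, x = agp_trajectory(lambda s: 0.0, lambda s: 1.0, p=3.0)
plt.plot(t, x)   # plt.plot interpolates linearly between the grid points
plt.show()
```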

Fig. 2 Simulation of AGPp (0[0,1],1[0,1]) for a \(p= \frac {1}{2}\), b p=1, c p=2, d p=3

Fig. 3 Effects of location functions (simulation of AGP3(m,1[0,1]) for b m=m1, c m=m2, d m=m3) and of scale functions (simulation of AGP3(0[0,1],S) for f S=S1, g S=S2, h S=S3)

Scale mixtures and particular p-generalizations of elliptical random processes

4.1 Scale mixtures of apec Gaussian distributions

Let \(\mu \in \mathbb {R}^{n}\) be a constant vector, \(D\in \mathbb {R}^{n \times n}\) a diagonal matrix having nonnegative diagonal elements and rank k≥0, V a positive random variable, and Z∼ANn,p(0n,D) independent of V. Furthermore, let G denote the cumulative distribution function (cdf) of V. Then, the distribution of an n-dimensional random vector X satisfying the stochastic representation

$$ X \stackrel{d}{=} \mu + V^{-\frac{1}{p}} \cdot Z $$
(8)

is called a scale mixture of the n-dimensional kapec Gaussian distribution with parameters μ and D and with mixture cdf G and is denoted by SMANn,p(μ,D,G).
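Representation (8) again translates into an immediate sampling scheme; a sketch in Python/NumPy, where sample_V stands for a user-supplied sampler from the mixture cdf G (all function names are ours, chosen for this illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_sman(mu, D_diag, p, sample_V, size=1):
    # representation (8): X = mu + V^(-1/p) * Z with Z ~ AN_{n,p}(0_n, D), V ~ G
    mu = np.asarray(mu, dtype=float)
    d = np.asarray(D_diag, dtype=float)
    g = rng.gamma(1 / p, 1.0, size=(size, len(d)))
    Z = rng.choice((-1.0, 1.0), size=(size, len(d))) * (p * g) ** (1 / p) * np.sqrt(d)
    V = sample_V(size)
    return mu + V[:, None] ** (-1 / p) * Z

# p-generalized Student-t case: V ~ Gamma with shape nu/p and rate nu/p,
# cf. Example 4.3 below (NumPy's gamma sampler takes shape and scale arguments)
nu, p = 4.0, 1.5
X = sample_sman([0.0, 0.0], [1.0, 2.0], p,
                lambda s: rng.gamma(nu / p, p / nu, size=s), size=10)
```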

The particular cases SMAN1,2(0,1,G), SMANn,2(μ,D,G) with full rank matrix D, and SMNn,p(G)=SMANn,p(0n,In×n,G) are introduced in Andrews and Mallows (1974), Lange and Sinsheimer (1993), and Arellano-Valle and Richter (2012), respectively, where numerous equivalent parameterizations of scale mixtures of the common multivariate Gaussian distribution and different notions such as normal/independent distributions or variance mixtures of Gaussian distributions are used. As a first characterization of the class of SMANn,p(μ,D,G)-distributions, its connections to the classes of SMNn,p(G)- and AECn,p(μ,D,g(k,p))-distributions are studied next.

Lemma 4.1

A random vector \(X\colon \Omega \to \mathbb {R}^{n}\) satisfies X∼SMANn,p(μ,D,G) with k=rk(D)≥1 if and only if

$$ X \stackrel{d}{=} \mu + W_{1} \sqrt{S_{1}} \tilde{X} \qquad\text{ where } \tilde{X} \sim SMN_{k,p}(G). $$

Corollary 4.1

There holds \(SMAN_{n,p}(\mu,D,G) = AEC_{n,p}\!\left (\mu,D,g_{SMN;G}^{(k,p)}\right)\) with k=rk(D) and

$$g_{SMN;G}^{(k,p)}(r) = \left(\frac{ p^{1-\frac{1}{p}} }{ 2\Gamma\!\left(\frac{1}{p}\right)} \right)^{k} \int\limits_{0}^{\infty}{ v^{\frac{k}{p}} e^{-\frac{r^{p}}{p}v} \:dG(v)}, \quad r \geq 0. $$

As a result, scale mixtures of kapec Gaussian distributions are themselves kapec. Moreover, many properties of such scale mixtures (such as the stochastic representations according to Lemmata 2.1 and 2.4) can be obtained from properties of n-dimensional kapec distributions by specializing the dg to that given in Corollary 4.1. Additionally, some properties such as the first two moments of SMANn,p(μ,D,G) can be specialized as follows.

Corollary 4.2

Let X∼SMANn,p(μ,D,G) with k=rk(D)≥1 and V∼G. Then, \(\mathbb {E}(X)=\mu \) if \(\mathbb {E}\left (V^{-\frac {1}{p}}\right)\) is finite, and \(\mathrm {Cov}(X) = \sigma _{g_{SMN;G}^{(k,p)}}^{2} D\) if \(\mathbb {E}\left (V^{-\frac {2}{p}}\right)\) is finite where

$$\sigma_{g_{SMN;G}^{(k,p)}}^{2} = p^{\frac{2}{p}} \frac{ \Gamma\!\left(\frac{3}{p}\right) }{ \Gamma\!\left(\frac{1}{p}\right)} \mathbb{E}\left(V^{-\frac{2}{p}}\right). $$
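For G being the Dirac distribution in 1 (so that \(\mathbb{E}\left(V^{-\frac{2}{p}}\right)=1\)), this reduces to the variance \(p^{2/p}\Gamma(3/p)/\Gamma(1/p)\) of a univariate p-generalized Gaussian variable, which can be checked by simulation; a sketch assuming Python with NumPy and SciPy:

```python
import numpy as np
from scipy.special import gamma

rng = np.random.default_rng(3)
p = 1.5
g = rng.gamma(1 / p, 1.0, size=10**6)
x = rng.choice((-1.0, 1.0), size=10**6) * (p * g) ** (1 / p)   # x ~ N_{1,p}
print(x.var(), p ** (2 / p) * gamma(3 / p) / gamma(1 / p))     # nearly equal
```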

Because of the assertion of the following lemma, SMANn,p(μ,D,G) can be called a variance mixture of ANn,p(μ,D). In the special case of μ=0n, D=In×n and p=2, the following lemma is covered by the main theorem in Kingman (1972).

Lemma 4.2

Let X∼SMANn,p(μ,D,G) with k=rk(D)≥1 and V∼G a positive random variable. Then, the conditional distribution of X given V=v satisfies

$$\mathfrak{L}(X \bigm| V=v) = AN_{n,p}\!\left(\mu, v^{-\frac{2}{p}} D \right), \quad v > 0. $$

According to Corollary 4.1, each scale mixture of the n-dimensional apec Gaussian distribution is an n-dimensional apec distribution with a specific dg. Now, we are interested in which AECn,p (μ,D,g(k,p))-distributions can be represented by scale mixtures of the n-dimensional apec Gaussian distribution. This question is answered by the following theorem using the notion of completely monotone functions on [0,∞). A function \(f\colon [0,\infty)\to \mathbb {R}\) is called completely monotone if its restriction \(\tilde{f}=f|_{(0,\infty)}\) to (0,∞) is completely monotone, i.e. \(\tilde{f}\) is infinitely often differentiable and satisfies the inequality \(\left (-1\right)^{m} \frac {d^{m}\tilde{f}}{dx^{m}}(z) \geq 0\) for all z∈(0,∞) and all \(m\in \mathbb {N}_{0}=\mathbb {N}\cup \{0\}\), see Sasvári (2013).

Theorem 4.1

Let X∼AECn,p (μ,D,g(k,p)) with D having positive rank k. Then, X∼SMANn,p(μ,D,G) for the cdf G of a suitable positive random variable if and only if the function h defined by \(h(y) = g^{(k,p)}\!\left (\sqrt [p]{y}\right)\), y∈[0,∞), is completely monotone.

For the special case of n=1 and p=2, this theorem is proven in Andrews and Mallows (1974). Subsequently, the Euclidean case p=2 of Theorem 4.1 in arbitrary dimensions (\(n\in \mathbb {N}\)) is proven in Lange and Sinsheimer (1993) and Gómez-Sánchez-Manzano et al. (2006). Particularly, the proof of Theorem 4.1 given in the “Proofs regarding to ‘Scale mixtures of apec Gaussian distributions’ section” section has analogies to that in Andrews and Mallows (1974) and the cdf G of the corresponding mixture distribution can be determined with the help of the inverse Laplace-Stieltjes transform of h.

Corollary 4.3

Let X∼AECn,p (μ,D,g(k,p)) with k=rk(D)≥1 and assume that the function \(y \mapsto g^{(k,p)}\!\left (\sqrt [p]{y}\right)\) is completely monotone on (0,∞) and has the inverse Laplace-Stieltjes transform α, that is

$$g^{(k,p)}\!\left(\sqrt[p]{y}\right) = \int\limits_{0}^{\infty}{ e^{-yt} \:d\alpha(t)}, \quad y>0. $$

Then, X∼SMANn,p(μ,D,G) and the cdf G of the mixture distribution satisfies the representation

$$\alpha(t) = \frac{p}{\omega_{k,p} \, \Gamma\!\left(\frac{k}{p}\right)} \int\limits_{1}^{t}{ z^{\frac{k}{p}} \:dG(pz)}, \quad t>0. $$

Moreover, the probability law corresponding to G is regular and has pdf fG if and only if α is absolutely continuous with pdf fα and both pdfs are connected by the equation

$$f_{\alpha}(t) = \frac{p^{2}}{\omega_{k,p} \, \Gamma\!\left(\frac{k}{p}\right)} \, t^{\frac{k}{p}} f_{G}(pt), \quad t>0. $$

Example 4.1

An n-dimensional apec Gaussian distribution is a scale mixture of itself with the Dirac distribution in 1 being the mixture distribution. The cdf of this Dirac distribution is the indicator function \(s \mapsto \mathbbm{1}_{(1,\infty)}(s)\).

Example 4.2

The n-dimensional kapec Pearson-type VII distribution with parameters M and ν, \(M > \frac {k}{p}\) and ν>0, and dg

$$ g_{PT7;M,\nu}^{(k,p)}(r) = \left(\frac{ p }{ 2\Gamma\left(\frac{1}{p}\right) }\right)^{k} \frac{ \Gamma(M) }{ \nu^{\frac{k}{p}}\Gamma\left(M-\frac{k}{p}\right)} \, \left(1+\frac{r^{p}}{\nu} \right)^{-M}, \quad r \geq 0, $$

is the scale mixture of the n-dimensional kapec Gaussian distribution where the mixture distribution is the Gamma distribution \(\Gamma _{M-\frac {k}{p},\frac {\nu }{p}}\) with shape \(M-\frac{k}{p}\) and rate \(\frac{\nu}{p}\), having pdf

$$f(y) = \frac{ \left(\frac{\nu}{p}\right)^{M-\frac{k}{p}} }{ \Gamma\!\left(M-\frac{k}{p}\right)} \, y^{M-\frac{k}{p}-1} e^{-\frac{\nu}{p} y}, \quad y>0. $$
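The identity between the dg \(g_{PT7;M,\nu}^{(k,p)}\) and the mixture integral \(g_{SMN;G}^{(k,p)}\) from Corollary 4.1 can be confirmed numerically; a sketch assuming Python with NumPy and SciPy, with parameters chosen such that \(M>\frac{k}{p}\):

```python
import numpy as np
from scipy.special import gamma
from scipy.integrate import quad

k, p, M, nu, r = 2, 1.5, 3.0, 2.0, 0.9

# dg of the kapec Pearson-type VII distribution, evaluated at r
lhs = ((p / (2 * gamma(1 / p))) ** k * gamma(M)
       / (nu ** (k / p) * gamma(M - k / p)) * (1 + r ** p / nu) ** (-M))

# mixture integral of Corollary 4.1 against the Gamma(M - k/p, nu/p) pdf
a, b = M - k / p, nu / p                     # shape and rate of the mixture law
f_G = lambda v: b ** a / gamma(a) * v ** (a - 1) * np.exp(-b * v)
rhs, _ = quad(lambda v: ((p ** (1 - 1 / p) / (2 * gamma(1 / p))) ** k
                         * v ** (k / p) * np.exp(-(r ** p / p) * v) * f_G(v)),
              0, np.inf)
print(lhs, rhs)                              # both values should agree
```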

Example 4.3

A special case of the preceding one is the n-dimensional kapec Student-t distribution with parameter ν>0 and dg \(g_{St;\nu }^{(k,p)} = g_{PT7;\frac {\nu +k}{p},\nu }^{(k,p)}\) being that of the scale mixture of the n-dimensional kapec Gaussian distribution with mixture distribution \(\Gamma _{\frac {\nu }{p},\frac {\nu }{p}}\).

Example 4.4

The n-dimensional kapec Slash distribution with parameter ν>0 is defined as the scale mixture of the n-dimensional kapec Gaussian distribution with mixture distribution having pdf \(f_{\nu }^{Sl}(y) = \nu y^{\nu -1} \mathbbm{1}_{(0,1)}(y)\), \(y\in \mathbb {R}\).
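Since the cdf corresponding to \(f_{\nu}^{Sl}\) is \(y \mapsto y^{\nu}\) on (0,1), the mixture variable of the kapec Slash distribution can be simulated by inversion; a one-line sketch in Python/NumPy:

```python
import numpy as np

rng = np.random.default_rng(4)
nu = 2.0
# if U ~ Uniform(0,1), then V = U**(1/nu) has pdf nu * v**(nu - 1) on (0,1)
V = rng.uniform(size=100_000) ** (1 / nu)
```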

4.2 Scale mixed p-generalized Gaussian processes having axis-aligned fdds

Let \(g_{SMN;G}^{(p)} = \left (g_{SMN;G}^{(k,p)}\right)_{k\in \mathbb {N}}\) denote the sequence of dgs of scale mixtures of k-dimensional p-generalized Gaussian distributions with one and the same mixture cdf G with

$$ G \text{ is independent of the index variable } k \text{ in } g_{SMN;G}^{(k,p)}. $$
(9)

According to Examples 4.1-4.4, representatives of mixture cdfs satisfying (9) are the Dirac distribution in 1, \(\Gamma _{\frac {\nu }{p},\frac {\nu }{p}}\) as well as the distribution with pdf \(f_{\nu }^{Sl}\), whereas the cdf of the distribution \(\Gamma _{M-\frac {k}{p},\frac {\nu }{p}}\) does not generally satisfy (9).

Lemma 4.3

For the cdf G of a positive random variable satisfying (9), the sequence \(g_{SMN;G}^{(p)}\) is consistent.

Throughout this section, again let I be a nonempty subset of \(\mathbb {R}\), \(m \colon I \to \mathbb {R}\) and S:I→[0,∞) arbitrary functions, and G the cdf of a positive random variable satisfying (9). Then, a random process having apec fdds, location and scale functions m and S, respectively, and the sequence \(g_{SMN;G}^{(p)}\) of dgs exists according to Theorem 3.1 and Corollary 3.1. Such a process is called a scale mixed p-generalized Gaussian process having axis-aligned fdds with location function m, scale function S and mixture cdf G and is denoted by SMAGPp(m,S,G), thus \(AECP_{p}\!\left (m,S;g_{SMN;G}^{(p)}\right) = SMAGP_{p}(m,S,G)\). The motivation and justification of this naming is given by a characterizing property of such processes in Theorem 4.2 below.

On the one hand, for the special case p=2, the class of SMAGPp(m,S,G)-processes is equal to the class of spherically invariant random processes having axis-aligned fdds which is defined in Vershik (1964). Moreover, it is shown implicitly in Yao (1973) and explicitly in Kano (1994) that a sequence g(2) is consistent if and only if all elements of g(2) are dgs of scale mixtures of multivariate Gaussian distributions with respect to one and the same mixture distribution. On the other hand, for general p>0, if the mixture distribution is chosen to be the Dirac distribution in 1, then \(SMAGP_{p}\!\left (m,S,\mathbbm{1}_{(1,\infty)}\right) = AGP_{p}(m,S)\). Furthermore, for any ν>0, let us denote the cdf of \(\Gamma _{\frac {\nu }{p},\frac {\nu }{p}}\) and of the distribution with pdf \(f_{\nu }^{Sl}\) by \(G_{\nu /p}^{St}\) and \(G_{\nu }^{Sl}\), respectively. Then, \(SMAGP_{p}\!\left (m,S,G_{\nu /p}^{St}\right)\) and \(SMAGP_{p}\!\left (m,S,G_{\nu }^{Sl}\right)\) are called p-generalized Student-t and p-generalized Slash process having axis-aligned fdds with location function m, scale function S and parameter ν, and are denoted by AStPp(m,S,ν) and ASlPp(m,S,ν), respectively.

Because of its construction, a scale mixed p-generalized Gaussian process X having axis-aligned fdds with location function m, scale function S and mixture cdf G is uniquely determined up to equivalence and denoted by X∼SMAGPp(m,S,G). Next, we state a characteristic representation of the random process SMAGPp(m,S,G) with the help of a specific p-generalized Gaussian process providing the motivation for the naming of such processes.

Theorem 4.2

Let X={Xt}t∈I be a scale mixed p-generalized Gaussian process having axis-aligned fdds, X∼SMAGPp(m,S,G). Then, X and \(Y = \left \{ m(t) + V^{-\frac {1}{p}} Z_{t} \right \}_{t \in I}\) are equivalent where the p-generalized Gaussian process Z={Zt}t∈I∼AGPp(0I,S) having axis-aligned fdds is independent of the random variable V∼G.
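Theorem 4.2 also yields a direct scheme for simulating approximate trajectories: a single draw of V scales an entire p-generalized Gaussian path, in contrast to the componentwise simulation of “Simulation” section. A sketch in Python/NumPy (smagp_trajectory, m_func, S_func and sample_V are our illustrative names):

```python
import numpy as np

rng = np.random.default_rng(5)

def smagp_trajectory(m_func, S_func, p, sample_V, n_grid=201):
    # Theorem 4.2: X_t = m(t) + V^(-1/p) * Z_t with one V ~ G per trajectory,
    # Z ~ AGP_p(0_I, S) simulated on an equidistant grid of [0,1]
    t = np.linspace(0.0, 1.0, n_grid)
    g = rng.gamma(1 / p, 1.0, size=n_grid)
    z = rng.choice((-1.0, 1.0), size=n_grid) * (p * g) ** (1 / p)
    m = np.array([m_func(s) for s in t])
    sc = np.sqrt([S_func(s) for s in t])
    return t, m + sample_V() ** (-1 / p) * sc * z

# p-generalized Slash process ASlP_p(0_[0,1], 1_[0,1], nu), cf. Example 4.4
nu, p = 2.0, 3.0
t, x = smagp_trajectory(lambda s: 0.0, lambda s: 1.0, p,
                        lambda: rng.uniform() ** (1 / nu))
```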

For p=2 and m=0I, Theorem 4.2 is proven in Wise and jun Gallagher (1978). In the sequel, using the characteristic representation from Theorem 4.2, we determine expectation and covariance functions as well as stationarity properties of the random process SMAGPp(m,S,G). Since SMAGPp(m,0I,G) coincides a.s. with the location function m, the results of Theorems 4.3 and 4.5 below are restricted to non-vanishing scale functions, i.e. S≠0I. Let \(g_{SMN;G}^{(p)} = \left (g_{SMN;G}^{(k,p)}\right)_{k\in \mathbb {N}}\) be the sequence of dgs of scale mixtures of multivariate p-generalized Gaussian distributions with one and the same mixture cdf G such that \(\mathbb {E}\left (V^{-\frac {2}{p}}\right)\) is finite where V∼G. Then, because of Corollary 4.2 and property (9) of G, the sequence \(\left (\sigma _{g_{SMN;G}^{(k,p)}}^{2} \right)_{k\in \mathbb {N}}\) of the corresponding univariate variance components is constant and an arbitrary element of it is subsequently denoted by \(\sigma _{g_{SMN;G}^{(p)}}^{2}\).

Theorem 4.3

Let X={Xt}t∈I∼SMAGPp(m,S,G) with S≠0I and V∼G. Then, the expectation function of the random process X exists and is equal to the location function m if \(\mathbb {E}\left (V^{-\frac {1}{p}}\right)\) is finite. If \(\mathbb {E}\left (V^{-\frac {2}{p}}\right)\) is finite, X is a second order random process with covariance function \(\Gamma \colon I \times I \to \mathbb {R}\) given by

$$\Gamma(s,t) = \left\{ \begin{array}{ll} \sigma_{g_{SMN;G}^{(p)}}^{2} \cdot S(t) & \text{ if } s=t \\ 0 & \text{ else} \end{array}\right. $$

As announced before, different stationarity properties of the random process SMAGPp(m,S,G) are studied now. We start with a result on strict stationarity.

Theorem 4.4

Let X={Xt}t∈I∼SMAGPp(m,S,G). Then, X is strictly stationary if and only if m and S are constant.

In the following theorem, we additionally take the notions of weak stationarity and white noise into consideration.

Theorem 4.5

Let X={Xt}t∈I∼SMAGPp(m,S,G), V∼G, \(\mu \in \mathbb {R}\) and δ>0. Then, the following statements are equivalent:

  1)

    There holds m(t)=μ and S(t)=δ for all t∈I and \(\mathbb {E}\left (V^{-\frac {2}{p}}\right)\) is finite.

  2)

    X is strictly stationary, \(\mathbb {E}\left (V^{-\frac {2}{p}}\right)\) is finite, the expectation function of X attains the constant value μ and the covariance function Γ of X satisfies \(\Gamma (t,t) = \sigma _{g_{SMN;G}^{(p)}}^{2} \delta \) for all t∈I and Γ(s,t)=0 for all s,t∈I with s≠t.

  3)

    X is weakly stationary with constant expectation μ and covariance function Γ given by Γ(s,t)=K(s−t) where K satisfies \(K(0) = \sigma _{g_{SMN;G}^{(p)}}^{2} \delta \) and K(h)=0 for all h∈{s−t : s,t∈I}∖{0}.

  4)

    X is white noise with expectation μ and variance \(\sigma _{g_{SMN;G}^{(p)}}^{2} \delta \).

Finally, we establish the closedness of the class of all scale mixed p-generalized Gaussian processes having axis-aligned fdds with respect to linear transformations.

Theorem 4.6

Let {Xt}t∈I∼SMAGPp(m,S,G), \(b \colon I \to \mathbb {R}\) and \(\gamma \colon I \to \mathbb {R}\). Then,

$$\left\{ \gamma(t) X_{t} + b(t) \right\}_{t \in I} \sim SMAGP_{p}\left(\gamma m + b, \gamma^{2} S, G \right), $$

where \(\lbrack \gamma m + b\rbrack \colon I \to \mathbb {R}\) and [γ2S]:I→[0,∞) are defined by [γm+b](t)=γ(t)m(t)+b(t), t∈I, and [γ2S](t)=(γ(t))2S(t), t∈I, respectively.

Proofs

5.1 Proofs of Lemmata 2.3 and 2.5

Before proving Lemma 2.3, we state a part of its proof as the following remark on the p-generalized surface content of p-generalized spheres of different dimensions in relation with a certain integral.

Remark 5.1

For every \(\nu \in \mathbb {N}\) with ν≥2 and every κ∈{1,…,ν−1},

$$\frac{ \omega_{\kappa,p} \, \omega_{\nu-\kappa,p} }{ \omega_{\nu,p}} \int\limits_{0}^{\frac{\pi}{2}}{ \frac{ \left(\cos(\psi)\right)^{\nu-\kappa-1} \left(\sin(\psi)\right)^{\kappa-1} } { \left(\left(\sin(\psi)\right)^{p}+\left(\cos(\psi)\right)^{p}\right)^{\frac{\nu}{p}}} \:d\psi } = 1. $$

According to Richter (2009), the left hand side of the above equation is the limit of the cdf of the p-generalized Fisher statistic \(T_{\nu-\kappa,\kappa}(p)\) evaluated at t as t→∞. Hence, Remark 5.1 follows from the elementary fact that the cdf of a univariate random variable evaluated at t tends to one as t→∞.

Proof of Lemma 2.3

Let k∈{1,…,n−1} be fixed. Denoting \(\tau _{k,p} = \frac { \Gamma \!\left (\frac {3}{p}\right)\Gamma \!\left (\frac {k}{p}\right) }{\Gamma \!\left (\frac {1}{p}\right)\Gamma \!\left (\frac {k+2}{p}\right) }\), using the integral transformation \(y=z^{p}+r^{p}\) with \(\frac {dy}{dz} = pz^{p-1}\) and finally renaming r and z by x and y, respectively, we get

$$\begin{array}{*{20}l} \sigma_{g_{(n)}^{(k,p)}}^{2} &= \tau_{k,p} \, \omega_{k,p} \int\limits_{0}^{\infty}{ r^{k+1} g_{(n)}^{(k,p)}(r) \:dr} \\ &= \tau_{k,p} \, \omega_{k,p} \, \omega_{n-k,p} \int\limits_{0}^{\infty}{ \int\limits_{0}^{\infty}{ x^{k+1} y^{n-k-1} g^{(n,p)}\left(\sqrt[p]{x^{p}+y^{p}}\right) \:dy} \:dx}. \end{array} $$

Applying the l2,p-spherical coordinate transformation \(x = r \frac {\cos (\psi)}{N_{p}(\psi)}\) and \(y = r \frac {\sin (\psi)}{N_{p}(\psi)}\) with \(N_{p}(\psi)=\left(\left|\sin(\psi)\right|^{p}+\left|\cos(\psi)\right|^{p}\right)^{1/p}\) and \(\frac {d(x,y)}{d(r,\psi)} = \frac {r}{N_{p}^{2}(\psi)}\), see Richter (2007), Fubini’s theorem and Remark 5.1 for ν=n+2 and κ=n−k, it follows

$$\begin{array}{*{20}l} \sigma_{g_{(n)}^{(k,p)}}^{2} &= \tau_{k,p} \, \omega_{k,p} \, \omega_{n-k,p} \int\limits_{0}^{\infty}{ \int\limits_{0}^{\frac{\pi}{2}}{ r^{n+1} g^{(n,p)}(r) \frac{ \left(\cos(\psi)\right)^{k+1} \left(\sin(\psi)\right)^{n-k-1} } { \left(\left(\sin(\psi)\right)^{p}+\left(\cos(\psi)\right)^{p}\right)^{\frac{n+2}{p}}} \:d\psi} \:dr} \\ &= \sigma_{g^{(n,p)}}^{2}. \end{array} $$

Proof of Lemma 2.5

It follows from \(D = \left (W_{1}\sqrt {S_{1}}\right)\!\left (W_{1}\sqrt {S_{1}}\right)^{\text {\texttt {T}}}\) and Lemma 2.1 that

$$W_{1}^{\text{\texttt{T}}} X \stackrel{d}{=} W_{1}^{\text{\texttt{T}}} \left(\mu + R \cdot \left(W_{1}\sqrt{S_{1}}\right) U_{p}^{(k)} \right) = W_{1}^{\text{\texttt{T}}} \mu + R \cdot \sqrt{S_{1}} U_{p}^{(k)}. $$

Since \(\sqrt {S_{1}}\) has full rank k, \(W_{1}^{\text {\texttt {T}}} X\) follows a k-dimensional rank-k-continuous p-generalized elliptically contoured distribution with parameters \(W_{1}^{\text {\texttt {T}}} \mu \) and S1 and with dg g(k,p). By definition of this distribution, for \(Y \sim \Phi _{g^{(k,p)}}\), it follows \(W_{1}^{\text {\texttt {T}}} X \stackrel {d}{=} W_{1}^{\text {\texttt {T}}} \mu + \sqrt {S_{1}} Y\). Thus, \(W_{1}^{\text {\texttt {T}}} X\) has pdf

$$\frac{1}{\sqrt{d_{i_{1}} \cdot\ldots\cdot d_{i_{k}}}} \: g^{(k,p)}\!\left(\left| \sqrt{S_{1}}^{-1} \left(z - W_{1}^{\text{\texttt{T}}} \mu \right) \right|_{p} \right), \quad z\in\mathbb{R}^{k}. $$

Since the columns of W1 and W2 together build an orthonormal basis of \(\mathbb {R}^{n}\), we have \(W_{2}^{\text {\texttt {T}}} W_{1} = 0_{(n-k) \times k}\) and

$$W_{2}^{\text{\texttt{T}}} X \stackrel{d}{=} W_{2}^{\text{\texttt{T}}} \mu + R \cdot W_{2}^{\text{\texttt{T}}} W_{1} \sqrt{S_{1}} U_{p}^{(k)} = W_{2}^{\text{\texttt{T}}} \mu \qquad \text{a.s.} $$

Thus, the orthogonal projection \(Y = \Pi _{U_{W_{2}^{\text {\texttt {T}}}}(\mu)}(X)\) of X into the space \(U_{W_{2}^{\text {\texttt {T}}}}(\mu)\) has the pdf

$$\frac{1}{\sqrt{d_{i_{1}} \cdot\ldots\cdot d_{i_{k}}}} \: g^{(k,p)}\!\left(\left| \sqrt{S_{1}}^{-1} W_{1}^{\text{\texttt{T}}} (x - \mu) \right|_{p} \right), \quad x\in\mathbb{R}^{n}, $$

and the orthogonal projection of X into the orthogonal complement of \(U_{W_{2}^{\text {\texttt {T}}}}(\mu)\phantom {\dot {i}\!}\) has probability mass zero. □

5.2 Proof of Theorem 3.1

We start with considering a particular case of Lemma 3.3.

Lemma 5.1

Let X∼AECn,p (μ,D,g(k,p)) where rk(D)=k≥1. Then, for every (n×n)-permutation matrix M and every \(b \in \mathbb {R}^{n}\),

$$\mathfrak{L}\left(MX + b\right) = AEC_{n,p}\!\left(M\mu + b, M D M^{\text{\texttt{T}}}, g^{(k,p)} \right). $$

Proof

With notations from “The class of n-dimensional rank-k-continuous axis-aligned p-generalized elliptically contoured distributions” section,

$$MX + b \stackrel{d}{=} \left(M\mu+b\right) + M W_{1} \sqrt{S_{1}} Y \qquad\text{where } Y \sim \Phi_{g^{(k,p)}}. $$

Since \(M W_{1} \sqrt {S_{1}} \in \mathbb {R}^{n \times k}\) arises from \(W_{1} \sqrt {S_{1}}\) by interchanging rows, it has rank k. Thus,

$$\begin{array}{*{20}l} \mathfrak{L}\left(MX + b\right) &= AEC_{n,p}\!\left(M\mu + b, \left(M W_{1} \sqrt{S_{1}}\right)\!\left(M W_{1} \sqrt{S_{1}}\right)^{\text{\texttt{T}}}, g^{(k,p)} \right) \\ &= AEC_{n,p}\!\left(M\mu + b, M D M^{\text{\texttt{T}}}, g^{(k,p)} \right). \end{array} $$

Proof of Lemma 3.3

Because of Lemma 5.1, only the case k=0 has to be considered. In this case, X∼AECn,p (μ,0n×n,g(0,p)), i.e. X follows the Dirac distribution in μ. Thus,

$$ MX + b \stackrel{d}{=} M\mu + b \qquad P-\text{a.s.} $$

and \(\mathfrak {L}(MX+b) = AEC_{n,p}\!\left (M\mu +b,M 0_{n \times n} M^{\text {\texttt {T}}},g^{(0,p)}\right)\) because of 0n×n=M0n×nMT. □

Denoting the cardinality of the set A by |A|, we continue with studying a particular case of Lemma 3.2.

Lemma 5.2

Let X=(X1,…,Xn)T∼AECn,p (μ,D,g(k,p)) where \(\mu = \left (\mu _{1},\ldots,\mu _{n}\right)^{\text {\texttt {T}}}\in \mathbb {R}^{n}\) and assume D= diag (d1,…,dn) has nonnegative diagonal entries and rank k≥1. Further, let \(m\in \mathbb {N}\) with m≤n, J={j1,…,jm}⊆{1,…,n} with j1<…<jm and \(\left |\left \{ \eta \in \{1,\ldots,m\} \colon d_{j_{\eta }} > 0 \right \}\right | \geq 1\). Then, the corresponding m-dimensional subvector \(X_{J} = \left (X_{j_{1}},\dots,X_{j_{m}} \right)^{\text {\texttt {T}}}\) of X satisfies

$$X_{J} \sim AEC_{m,p}\!\left(\mu_{J},D_{J},g_{(k)}^{(k_{J},p)}\right) $$

where \(\mu _{J} = \left (\mu _{j_{1}},\ldots,\mu _{j_{m}}\right)^{\text {\texttt {T}}}\), \(D_{J} = \text { diag }\!\left (d_{j_{1}},\ldots,d_{j_{m}} \right)\) and kJ=rk(DJ)≥1.

Proof

Starting from the equation XJ=ΓX where

$$\Gamma = \left(\begin{array}{c} {e_{j_{1}}^{(n)}}^{\text{\texttt{T}}} \\ \vdots \\ {e_{j_{m}}^{(n)}}^{\text{\texttt{T}}} \end{array}\right) \in \mathbb{R}^{m \times n} $$

and using notations from “The class of n-dimensional rank-k-continuous axis-aligned p-generalized elliptically contoured distributions” section, it follows that

$$\Gamma W_{1} \sqrt{S_{1}} = \left(\begin{array}{c} {e_{j_{1}}^{(n)}}^{\text{\texttt{T}}} \\ \vdots \\ {e_{j_{m}}^{(n)}}^{\text{\texttt{T}}} \end{array}\right) \!\left(\begin{array}{ccc} \sqrt{d_{i_{1}}} e_{i_{1}}^{(n)} & \cdots & \sqrt{d_{i_{k}}} e_{i_{k}}^{(n)} \end{array}\right) = \left(\begin{array}{c} f(1) \\ \vdots \\ f(n) \end{array}\right) \in \mathbb{R}^{m \times k} $$

where

$$f(\eta) = \left\{ \begin{array}{ll} \sqrt{d_{i_{l}}} {e_{l}^{(k)}}^{\text{\texttt{T}}} & \quad \text{ if } j_{\eta}=i_{l} \text{ for an } l\in\{1,\ldots,k\} \\ 0_{k}^{\text{\texttt{T}}} & \quad \text{ else } \end{array}\right., \quad \eta=1,\ldots,m. $$

Thus, for \(Y = \left (Y_{1},\ldots,Y_{k}\right) \sim \Phi _{g^{(k,p)}}\), we get

$$\Gamma W_{1} \sqrt{S_{1}} Y = \left(\begin{array}{c} h(1) \\ \vdots \\ h(m) \end{array}\right) \in \mathbb{R}^{m} $$

where

$$h(\eta) = \sum\limits_{l=1}^{k}{ \sqrt{d_{i_{l}}} Y_{l} \delta_{i_{l} j_{\eta}} } = \left\{ \begin{array}{ll} \sqrt{d_{i_{l}}} Y_{l} & \quad \text{ if } j_{\eta} = i_{l} \text{ for an } l\in\{1,\ldots,k\} \\ 0 & \quad \text{ else } \end{array}\right., $$

η=1,…,m. Now, let

$$ K = \left\{ l\in\{1,\ldots,k\} \colon i_{l} = j_{\eta} \text{ for an } \eta\in\{1,\ldots,m\} \right\}. $$
(10)

Then, \(\left |K\right | = \left |\left \{ \eta \in \{1,\ldots,m\} \colon d_{j_{\eta }} > 0 \right \}\right | \geq 1\) and the matrix \(\Gamma W_{1} \sqrt {S_{1}}\) has k−|K| zero columns. Since each non-zero column is the product of a positive constant with a unit vector in \(\mathbb {R}^{m}\), the vector \(\Gamma W_{1} \sqrt {S_{1}} Y\) consists of |K| different components of Y multiplied by positive constants and of m−|K| zeros. Subsequently, put K={l1,…,l|K|} where \(l_{1} < l_{2} < \ldots < l_{|K|}\) is an increasing enumeration of the elements of K and let

$$M = \left(\begin{array}{c} \psi(1) \\ \vdots \\ \psi(m) \end{array}\right) \in \mathbb{R}^{m \times \left|K\right|} $$

be a matrix consisting of the row vectors

$$\psi(\eta) = \left\{ \begin{array}{ll} \sqrt{d_{i_{l_{\kappa}}}} {e_{\kappa}^{(|K|)}}^{\text{\texttt{T}}} & \quad \text{ if}\ j_{\eta} = i_{l_{\kappa}} \text{ for a } \kappa\in\{1,\ldots,|K|\} \\ 0_{|K|}^{\text{\texttt{T}}} & \quad \text{ else } \end{array}\right., \quad \eta=1,\ldots,m. $$

Then, for \(B\in \mathfrak {B}^{m}\), \(Y \sim \Phi _{g^{(k,p)}}\phantom {\dot {i}\!}\) and \(Z \sim \Phi _{g_{(k)}^{(|K|,p)}}\phantom {\dot {i}\!}\), it follows that

$$\begin{array}{*{20}l} P\!\left(\Gamma W_{1} \sqrt{S_{1}} Y \in B \right) &= P\left(\left(\begin{array}{c} h(1) \\ \vdots \\ h(m) \end{array}\right) \in B \;, \; Y_{l}\in\mathbb{R} \text{ for all } l\in\{1,\ldots,k\}\backslash K \right) \\ &= P\!\left(MZ \in B \right) \end{array} $$

and, because of (1) and rk(M)=|K|,

$$\begin{array}{*{20}l} X_{J} = \Gamma X \stackrel{d}{=} \mu_{J} + \Gamma W_{1} \sqrt{S_{1}} Y &\stackrel{d}{=} \mu_{J} + MZ \\ &\sim EC_{m,p}\!\left(\mu_{J},MM^{\text{\texttt{T}}},g_{(k)}^{(|K|,p)}\right). \end{array} $$

Note that M can be extended to \(\Gamma W_{1} \sqrt {S_{1}}\) by adding zero columns without changing the rank. Moreover, \(MM^{\text {\texttt {T}}} = \left (\Gamma W_{1} \sqrt {S_{1}}\right)\!\left (\Gamma W_{1} \sqrt {S_{1}}\right)^{\text {\texttt {T}}} = \Gamma D \Gamma^{\text {\texttt {T}}} = D_{J}\) and \(|K|=\text{rk}(M)=\text{rk}\left(MM^{\text{\texttt{T}}}\right)=\text{rk}(D_{J})=k_{J}\). Summarizing, we have

$$\mathfrak{L}(X_{J}) = AEC_{m,p}\!\left(\mu_{J},D_{J},g_{(k)}^{(k_{J},p)}\right). $$

□

Proof of Lemma 3.2

If k=0, \(X \sim AEC_{n,p}\!\left(\mu,0_{n \times n},g^{(0,p)}\right)\) and \(J=\left\{j_{1},\ldots,j_{m}\right\} \subseteq \{1,\ldots,n\}\) with \(j_{1}<\ldots<j_{m}\). In this case, \(X_{J}=\mu_{J}\) P-a.s. and

$$X_{J} \sim EC_{m,p}\!\left(\mu_{J},0_{m \times m},g_{(0)}^{(0,p)}\right) $$

because the symbols \(g^{(0,p)}\) and \(g_{(k)}^{(0,p)}\) can be switched for a \(k\in \mathbb {N}\cup \{0\}\). Now, let \(X \sim AEC_{n,p}\!\left(\mu,D,g^{(k,p)}\right)\) where \(D= \text{ diag }\left(d_{1},\ldots,d_{n}\right)\) has nonnegative diagonal elements and positive rank k, and let \(J=\left\{j_{1},\ldots,j_{m}\right\} \subseteq \{1,\ldots,n\}\) be an index set such that \(j_{1}<\ldots<j_{m}\) and \(\left |\left \{ \eta \in \{1,\ldots,m\} \colon \sqrt {d_{j_{\eta }}} > 0 \right \}\right | \geq 1\). Then, Lemma 5.2 yields the assertion. Finally, let \(X \sim AEC_{n,p}\!\left(\mu,D,g^{(k,p)}\right)\) where \(D= \text{ diag }\left(d_{1},\ldots,d_{n}\right)\) has nonnegative diagonal elements and positive rank k but, now, where \(J=\left\{j_{1},\ldots,j_{m}\right\} \subseteq \{1,\ldots,n\}\) is an index set such that \(j_{1}<\ldots<j_{m}\) and \(\left |\left \{ \eta \in \{1,\ldots,m\} \colon \sqrt {d_{j_{\eta }}} > 0 \right \}\right | = 0\). Using the notation from the proof of Lemma 5.2, the set K defined in (10) has cardinality \(\left |K\right | = \left |\left \{ \eta \in \{1,\ldots,m\} \colon \sqrt {d_{j_{\eta }}} > 0 \right \}\right | = 0\). Because of this, \(\Gamma W_{1} \sqrt {S_{1}}\) is equal to the (m×k) zero matrix and the distribution of \(\Gamma W_{1} \sqrt {S_{1}} Y\) for \(Y \sim \Phi _{g^{(k,p)}}\) is concentrated in \(0_{m}\). Since \(D_{J} = \text { diag }\!\left (d_{j_{1}},\ldots,d_{j_{m}} \right) = 0_{m \times m}\), \(k_{J}=\text{rk}(D_{J})=0\) and \(X_{J} = \Gamma X \stackrel {d}{=} \mu _{J} + \Gamma W_{1} \sqrt {S_{1}} Y\) for \(Y \sim \Phi _{g^{(k,p)}}\), it follows

$$X_{J} = \mu_{J} \qquad P-\text{a.s.}, $$

i.e. \(X_{J} \sim AEC_{m,p}\!\left (\mu _{J},0_{m \times m},g_{(k)}^{(0,p)}\right)\). □

Proof of Lemma 3.1

Starting from (5) and using the transformation \(\tilde {y}=\left |x\right |_{p}^{p}+y^{p}\), for \(x\in \mathbb {R}^{k}\), we get

$$\begin{array}{*{20}l} g^{(k,p)}\!\left(\left|x\right|_{p}\right) & = \int\limits_{-\infty}^{\infty}{ g^{(k+1,p)}\!\left(\sqrt[p]{ \left|x\right|_{p}^{p} + \left|y\right|^{p} }\right) \:dy} \\ &= 2 \int\limits_{0}^{\infty}{ g^{(k+1,p)}\!\left(\sqrt[p]{ \left|x\right|_{p}^{p} + y^{p} }\right) \:dy} \\ &= \frac{2}{p} \int\limits_{\left|x\right|_{p}^{p}}^{\infty}{ \left(\tilde{y}-\left|x\right|_{p}^{p} \right)^{\frac{1}{p}-1} g^{(k+1,p)}\!\left(\sqrt[p]{ \tilde{y} }\right) \:d\tilde{y}}. \end{array} $$

Because of \(\omega_{1,p}=2\),

$$g^{(k,p)}\!\left(\left|x\right|_{p}\right) = g_{(k+1)}^{(k,p)}\!\left(\left|x\right|_{p}\right), \quad x\in\mathbb{R}^{k}. $$

□

Proof of Theorem 3.1

For \(n\in \mathbb {N}\) and arbitrary elements \(t_{1},\ldots,t_{n},t_{n+1}\) of I, let \(\mu ^{(n+1)} = \left (m(t_{1}),\ldots,m(t_{n}),m(t_{n+1}) \right)^{\text {\texttt {T}}} \in \mathbb {R}^{n+1}\) and assume \(D^{(n+1)}= \text{ diag }\left(S(t_{1}),\ldots,S(t_{n}),S(t_{n+1})\right)\) to have rank k. Further, let \(Q_{\left \{ t_{1},\ldots,t_{n},t_{n+1} \right \}}(\cdot) = AEC_{n+1,p}\!\left (\cdot \bigm | \mu ^{(n+1)},D^{(n+1)},g^{(k,p)}\right) \in \mathcal {AEC}_{g^{(p)}}^{I}(m,S)\) be the probability measure induced by a random vector following the (n+1)-dimensional kapec distribution with parameters \(\mu^{(n+1)}\) and \(D^{(n+1)}\) and dg \(g^{(k,p)}\) from the consistent sequence \(g^{(p)}\) if k>0 and symbol \(g^{(0,p)}\) if k=0, respectively. By Lemma 3.2, it follows

$$\begin{array}{*{20}l} Q_{\left\{ t_{1},\ldots,t_{n},t_{n+1} \right\}}(A \times \mathbb{R}) &= AEC_{n+1,p}\!\left(A \times \mathbb{R} \bigm| \mu^{(n+1)},D^{(n+1)},g^{(k,p)}\right) \\ &= AEC_{n,p}\!\left(A \bigm| \mu^{(n)},D^{(n)},g_{(k)}^{(\kappa,p)}\right), \quad A\in\mathfrak{B}^{n}, \end{array} $$

where \(\mu^{(n)}=\left(m(t_{1}),\ldots,m(t_{n})\right)^{\text{\texttt{T}}}\) and \(D^{(n)}= \text{ diag }\left(S(t_{1}),\ldots,S(t_{n})\right)\) with \(\kappa=\text{rk}\left(D^{(n)}\right) \in \{k-1,k\}\). Furthermore, using Lemma 3.1 if κ>0 and recalling the exchangeability of symbols \(g_{(k)}^{(0,p)}\) and \(g^{(0,p)}\) (to maintain the notation as in the proof of Lemma 3.2) if κ=0, we have

$$Q_{\left\{ t_{1},\ldots,t_{n},t_{n+1} \right\}}(A \times \mathbb{R}) = AEC_{n,p}\!\left(A \bigm| \mu^{(n)},D^{(n)},g^{(\kappa,p)}\right) = Q_{\left\{ t_{1},\ldots,t_{n} \right\}}(A). $$

Therefore, the marginal probability measure \(Q_{\left \{ t_{1},\ldots,t_{n} \right \}}\) of \(Q_{\left \{ t_{1},\ldots,t_{n+1} \right \}}\) corresponds to the element \(AEC_{n,p}\!\left(\mu^{(n)},D^{(n)},g^{(\kappa,p)}\right)\) of \(\mathcal {AEC}_{g^{(p)}}^{I}(m,S)\) and, thus, the Kolmogorov consistency condition (6) is satisfied. Now, let π be a permutation of {1,…,n} and M the corresponding permutation matrix. Additionally, let \(Q_{\left \{ t_{1},\ldots,t_{n} \right \}}(\cdot) = AEC_{n,p}\!\left (\cdot \bigm | \mu,D,g^{(\kappa,p)}\right) \in \mathcal {AEC}_{g^{(p)}}^{I}(m,S)\) be the probability measure induced by a random vector X with \(X \sim AEC_{n,p}\!\left(\mu,D,g^{(\kappa,p)}\right)\) where \(\mu=\mu^{(n)}=\left(m(t_{1}),\ldots,m(t_{n})\right)^{\text{\texttt{T}}}\) and \(D=D^{(n)}= \text{ diag }\left(S(t_{1}),\ldots,S(t_{n})\right)\) with κ=rk(D). Then, \(Q_{\left \{ t_{\pi (1)},\ldots,t_{\pi (n)} \right \}}\) is induced by MX and, according to Lemma 3.3,

$$Q_{\left\{ t_{\pi(1)},\ldots,t_{\pi(n)} \right\}}(\cdot) = AEC_{n,p}\!\left(\cdot \bigm| M\mu,MDM^{\text{\texttt{T}}},g^{(\kappa,p)}\right). $$

If κ=0, then \(D=MDM^{\text{\texttt{T}}}=0_{n \times n}\). In this case, \(Q_{\left \{ t_{1},\ldots,t_{n} \right \}}\) and \(Q_{\left \{ t_{\pi (1)},\ldots,t_{\pi (n)} \right \}}\) are Dirac measures in μ and Mμ, respectively, and, for \(A\in \mathfrak {B}^{n}\),

$$Q_{\left\{ t_{\pi(1)},\ldots,t_{\pi(n)} \right\}}(A_{\pi}) = \delta_{M\mu}(MA) = \delta_{\mu}(A) = Q_{\left\{ t_{1},\ldots,t_{n} \right\}}(A). $$

Thus, the Kolmogorov consistency condition (7) is satisfied if κ=0. Now, let κ>0. Using the notations of the matrices \(S_{1}\), \(W_{1}\) and \(W_{2}\) from “The class of n-dimensional rank-k-continuous axis-aligned p-generalized elliptically contoured distributions” section, \(\left (W_{1}\sqrt {S_{1}}\right)\!\left (W_{1}\sqrt {S_{1}}\right)^{\text {\texttt {T}}}\) is a decomposition of D with \(W_{1}\sqrt {S_{1}} \in \mathbb {R}^{n \times \kappa }\) and \(\mathrm{rk}\left (W_{1}\sqrt {S_{1}}\right) = \kappa \), and the columns of \(W_{2}\) are a basis of the kernel of D. Consequently, on the one hand, \(\left (M W_{1} \sqrt {S_{1}}\right)\!\left (M W_{1} \sqrt {S_{1}}\right)^{\text {\texttt {T}}}\) is a corresponding decomposition of \(MDM^{\text{\texttt{T}}}\) with \(M W_{1}\sqrt {S_{1}} \in \mathbb {R}^{n \times \kappa }\) and \(\mathrm{rk}\left (M W_{1}\sqrt {S_{1}}\right) = \kappa \) since left multiplication of \(W_{1}\sqrt {S_{1}}\) by the permutation matrix M only interchanges rows and leaves the rank unchanged. On the other hand, the columns of \(MW_{2}\) form a basis of the kernel of \(MDM^{\text{\texttt{T}}}\),

$$U_{\left(MW_{2}\right)^{\text{\texttt{T}}}}(M\mu) = \left\{ M y \in\mathbb{R}^{n} \colon W_{2}^{\text{\texttt{T}}} y = W_{2}^{\text{\texttt{T}}} \mu \right\} = M \cdot U_{W_{2}^{\text{\texttt{T}}}}(\mu), $$

and

$$\lambda_{U_{\left(MW_{2}\right)^{\text{\texttt{T}}}}(M\mu)}^{(\kappa)}(\cdot) = \lambda_{M \cdot U_{W_{2}^{\text{\texttt{T}}}}(\mu)}^{(\kappa)}(\cdot) = \lambda_{U_{W_{2}^{\text{\texttt{T}}}}(\mu)}^{(\kappa)}\!\left(f_{M^{\text{\texttt{T}}}}(\cdot) \right) $$

where \(f_{M^{\text {\texttt {T}}}}\phantom {\dot {i}\!}\) is defined by \(\phantom {\dot {i}\!}f_{M^{\text {\texttt {T}}}}(x) = M^{\text {\texttt {T}}} x\), \(x\in \mathbb {R}^{n}\). Finally, for \(A\in \mathfrak {B}^{n}\), Eq. 4 resulting from the pdf-like representation of an n-dimensional apec distribution together with the transformation \(y = f_{M^{\text {\texttt {T}}}}(x)\phantom {\dot {i}\!}\) having the Jacobian |det(M)|=1 yield

$$\begin{array}{*{20}l} & Q_{\left\{ t_{\pi(1)},\ldots,t_{\pi(n)} \right\}}(A_{\pi}) \\ &= AEC_{n,p}\!\left(MA \bigm| M\mu,MDM^{\text{\texttt{T}}},g^{(\kappa,p)}\right) \\ &= \frac{1}{\det\!\left(\sqrt{S_{1}}\right)} \int\limits_{MA }{ \! g^{(\kappa,p)}\!\left(\left| \sqrt{S_{1}}^{-1} (MW_{1})^{\text{\texttt{T}}} (x - M\mu) \right|_{p} \right) \: \lambda_{U_{\left(MW_{2}\right)^{\text{\texttt{T}}}}(M\mu)}^{(\kappa)}(dx)} \\ &= \frac{1}{\det\!\left(\sqrt{S_{1}}\right)} \int\limits_{f_{M^{\text{\texttt{T}}}}(A) }{ \!\!\!\! g^{(\kappa,p)}\!\left(\left| \sqrt{S_{1}}^{-1} W_{1}^{\text{\texttt{T}}} \left(f_{M^{\text{\texttt{T}}}}(x) - \mu \right) \right|_{p} \right) \: \lambda_{U_{W_{2}^{\text{\texttt{T}}}}(\mu)}^{(\kappa)}\!\left(f_{M^{\text{\texttt{T}}}}(dx) \right)} \\ &= \frac{1}{\det\!\left(\sqrt{S_{1}}\right)} \int\limits_{A }{ g^{(\kappa,p)}\!\left(\left| \sqrt{S_{1}}^{-1} W_{1}^{\text{\texttt{T}}} (y - \mu) \right|_{p} \right) \: \lambda_{U_{W_{2}^{\text{\texttt{T}}}}(\mu)}^{(\kappa)}(dy)} \\ &= Q_{\left\{ t_{1},\ldots,t_{n} \right\}}(A). \end{array} $$

Thus, the Kolmogorov consistency condition (7) is satisfied in case κ>0, too. □

5.3 Proofs regarding the “Scale mixtures of apec Gaussian distributions” section

Proof of Lemma 4.1

For a positive random variable \(V \sim G\), because of (1) and (8), we have \(X \stackrel {d}{=} \mu + V^{-\frac {1}{p}} \cdot Z\) where \(Z \sim AN_{n,p}(0_{n},\Sigma)\). It follows \(X \stackrel {d}{=} \mu + V^{-\frac {1}{p}} \cdot W_{1} \sqrt {S_{1}} \tilde {Z} = \mu + W_{1} \sqrt {S_{1}} \left(V^{-\frac {1}{p}} \tilde {Z}\right)\) where \(\tilde {Z} \sim N_{k,p}\). Thus, \(X \stackrel {d}{=} \mu + W_{1} \sqrt {S_{1}} \tilde {X}\) where \(\tilde {X} \sim SMN_{k,p}(G)\). □

Proof of Corollary 4.1

In the case k≥1, the assertion follows from Lemma 4.1, Eq. 1 and the identity \(SMN_{k,p}(G) = \Phi _{g_{SMN;G}^{(k,p)}}\) from Arellano-Valle and Richter (2012). In the case k=0, \(Z=0_{n}\) a.s. in (8). Therefore, \(X \sim SMAN_{n,p}(\mu,0_{n \times n},G)\), that is, X has the Dirac distribution in μ. Thus, \(X \sim AEC_{n,p}\!\left (\mu,0_{n \times n},g_{SMN;G}^{(0,p)}\right)\), where \(g_{SMN;G}^{(0,p)}\) is just a symbol to maintain notations. □

Proof of Corollary 4.2

By Corollary 4.1, the assertion follows from Lemma 2.2 with the specific dg \(g_{SMN;G}^{(k,p)}\). Particularly, for \(m\in\{1,2\}\), \(I_{k+m}\!\left (g_{SMN;G}^{(k,p)}\right)\) is finite if and only if \(\mathbb {E}\left (V^{-\frac {m}{p}}\right)\) is finite. To see this, consider

$$\begin{array}{*{20}l} I_{k+m}\!\left(g_{SMN;G}^{(k,p)}\right) &= C_{p}^{k} \int\limits_{0}^{\infty}{ r^{k+m-1} \int\limits_{0}^{\infty}{ v^{\frac{k}{p}} e^{-\frac{r^{p}}{p}v} \:dG(v)} \:dr} \\ &= C_{p}^{k} \, p^{\frac{k+m}{p}-1} \Gamma\!\left(\frac{k+m}{p}\right) \int\limits_{0}^{\infty}{ v^{-\frac{m}{p}} \:dG(v)}. \end{array} $$

Here, we used the notation \(C_{p} = \frac { p^{1-\frac {1}{p}} }{ 2\Gamma \!\left (\frac {1}{p}\right) }\), applied Fubini’s theorem twice and changed variables \(s = \frac {r^{p}}{p} v\) with \(\frac {dr}{ds} = p^{\frac {1}{p}-1} v^{-\frac {1}{p}} s^{\frac {1}{p}-1}\). Finally, by Lemma 2.2, the specific univariate variance component is

$$\sigma_{g_{SMN;G}^{(k,p)}}^{2} = p^{\frac{2}{p}} \frac{ \Gamma\!\left(\frac{3}{p}\right) }{ \Gamma\!\left(\frac{1}{p}\right)} \mathbb{E}\left(V^{-\frac{2}{p}}\right). $$

□
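As a quick check of this formula, consider the degenerate mixture \(G=\mathbbm{1}_{(1,\infty)}\), i.e. V=1 a.s.; then \(\mathbb{E}\left(V^{-\frac{2}{p}}\right)=1\) and the variance component reduces to

$$\sigma_{g_{SMN;\mathbbm{1}_{(1,\infty)}}^{(k,p)}}^{2} = p^{\frac{2}{p}} \frac{ \Gamma\!\left(\frac{3}{p}\right) }{ \Gamma\!\left(\frac{1}{p}\right)}, $$

which is exactly the p-generalized Gaussian value \(\sigma_{g_{PE}^{(p)}}^{2}\) recalled before Lemma 5.4 below.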

Proof of Lemma 4.2

Let \(Z \sim AN_{n,p}(0_{n},D)\) and assume Z to be independent of V. Making use of Eq. 8 and exploiting the independence of Z and V, for all \(B\in \mathfrak {B}^{n}\) and v>0,

$$P\!\left(X \in B \bigm| V=v \right) = P\!\left(\left(\mu + V^{-\frac{1}{p}} Z\right) \in B \bigm| V=v \right) = P\!\left(\left(\mu + v^{-\frac{1}{p}} W_{1} \sqrt{S_{1}} \tilde{Z}\right) \in B \right) $$

where \(\tilde {Z} \sim N_{k,p}\). Because of \(\left (v^{-\frac {1}{p}} W_{1} \sqrt {S_{1}}\right)\!\left (v^{-\frac {1}{p}} W_{1} \sqrt {S_{1}}\right)^{\text {\texttt {T}}} = v^{-\frac {2}{p}} D\) with \(v^{-\frac {1}{p}} W_{1} \sqrt {S_{1}} \in \mathbb {R}^{n \times k}\) and \(\mathrm {rk\left (v^{-\frac {1}{p}} W_{1} \sqrt {S_{1}}\right)} = k\), according to (1) with dg \(g_{PE}^{(k,p)}\), the assertion follows from

$$\mathfrak{L}\left(\mu + v^{-\frac{1}{p}} W_{1} \sqrt{S_{1}} \tilde{Z}\right) = AN_{n,p}\left(\mu, v^{-\frac{2}{p}}D \right), \quad v>0. $$

□

Before proving the general statement of Theorem 4.1, we prove the following particular one.

Lemma 5.3

Let \(X \sim \Phi _{g^{(k,p)}}\). Then, \(X \sim SMN_{k,p}(G)\) for the cdf G of a suitable positive random variable if and only if the function h defined by \(h(y) = g^{(k,p)}\!\left (\sqrt [p]{y}\right)\), \(y\in[0,\infty)\), is completely monotone.

Proof

Throughout this proof, let \(X \sim \Phi _{g^{(k,p)}}\). If \(X \sim SMN_{k,p}(G)\) for the cdf G of a suitable positive random variable, according to Corollary 4.1, \(g^{(k,p)} = g_{SMN;G}^{(k,p)}\) and

$$h(y) = g_{SMN;G}^{(k,p)}\!\left(y^{\frac{1}{p}}\right) = C_{p}^{k} \int\limits_{0}^{\infty}{ v^{\frac{k}{p}} e^{-\frac{y}{p}v} \:dG(v)}, \quad y \geq 0, $$

where \(C_{p} = \frac { p^{1-\frac {1}{p}} }{ 2\Gamma \!\left (\frac {1}{p}\right) }\). Because of

$$\frac{d^{m} h}{dy^{m}}(y) = \left(-1\right)^{m} \frac{C_{p}^{k}}{p^{m}} \int\limits_{0}^{\infty}{ v^{\frac{k}{p}+m} e^{-\frac{y}{p}v} \:dG(v)}, \quad y>0 $$

for all \(m\in \mathbb {N}\cup \{0\}\), h is completely monotone in \([0,\infty)\). Now, let \(h = g^{(k,p)}\!\left (\sqrt [p]{\cdot }\right)\) be completely monotone on \([0,\infty)\). According to the Hausdorff–Bernstein–Widder theorem, see Widder (1946), h is representable as the Laplace–Stieltjes transform of a nondecreasing function α, i.e.

$$h(y) = \int\limits_{0}^{\infty}{ e^{-yt} \:d\alpha(t)}, \quad 0< y<\infty, $$

and the integral converges for all \(0<y<\infty\). Additionally, setting

$$\beta(t) = \int\limits_{1}^{t}{ C_{p}^{-k} v^{-\frac{k}{p}} \:d\alpha\!\left(\frac{1}{p}v\right)}, \quad t>0, $$

Stieltjes integral properties yield

$$h(y) = \int\limits_{0}^{\infty}{ e^{-y\left(\frac{1}{p}v\right)} \:d\alpha\!\left(\frac{1}{p}v\right) } = C_{p}^{k} \int\limits_{0}^{\infty}{ v^{\frac{k}{p}} e^{-\frac{1}{p}yv} \:d\beta(v)}, \quad y>0. $$

Thus,

$$g^{(k,p)}(r) = h\!\left(r^{p}\right) = C_{p}^{k} \int\limits_{0}^{\infty}{ v^{\frac{k}{p}} e^{-\frac{r^{p}}{p}v} \:d\beta(v)}, \quad r>0. $$

Consequently, it remains to show that G defined by \(G(v) = \beta (v) - \lim \limits _{t \searrow 0}{\beta (t)}\), v>0, is the cdf of a positive random variable. Note that G is nondecreasing since α has this property. Hence,

$$\begin{array}{*{20}l} G(v_{2})-G(v_{1}) = \beta(v_{2})-\beta(v_{1}) = \int\limits_{v_{1}}^{v_{2}}{ C_{p}^{-k} v^{-\frac{k}{p}} \:d\alpha\!\left(\frac{1}{p}v\right) } \geq 0, \quad 0 < v_{1} \leq v_{2}. \end{array} $$

It remains to show that \(1 = \lim \limits _{v\to \infty }{ G(v)} - \lim \limits _{t\searrow 0}{ G(t) }\). To this end, let \(\tilde {g}^{(k,p)}(z,r) = C_{p}^{k} \int \limits _{z^{-1}}^{z}{ v^{\frac {k}{p}} e^{-\frac {r^{p}}{p}v} \:d\beta (v) }\), 1<z<, denote a left and right truncated version of g(k,p). Using Fubini’s theorem, change of variables \(s=r\sqrt [p]{v}\) with \(\frac {dr}{ds}=v^{-\frac {1}{p}}\) and the equality \(\omega _{k,p} \, I_{k}\!\left (g_{PE}^{(k,p)}\right) = 1\), we have

$$\begin{array}{*{20}l} \int\limits_{0}^{\infty}{ r^{k-1} \tilde{g}^{(k,p)}(z,r) \:dr } &= \int\limits_{z^{-1}}^{z}{ v^{\frac{k}{p}} \int\limits_{0}^{\infty}{ C_{p}^{k} r^{k-1} e^{-\frac{r^{p}}{p}v} \:dr} \:d\beta(v)} \\ &= \int\limits_{z^{-1}}^{z}{ I_{k}\!\left(g_{PE}^{(k,p)}\right) \:d\beta(v)} \\ &= \frac{1}{\omega_{k,p}} \left(\beta(z) - \beta\left(z^{-1}\right) \right), \quad z>1. \end{array} $$

Because \(\tilde {g}^{(k,p)}(z,r)\) is a nonnegative function and \(g^{(k,p)}(r)=h(r^{p})\), it follows \(0 \leq r^{k-1} \tilde {g}^{(k,p)}(z,r) \leq r^{k-1} g^{(k,p)}(r)\) for all z>1 and r>0. Furthermore, because of its structure as well as its nonnegativity, for all r>0, the function \(r^{k-1} \tilde {g}^{(k,p)}(z,r)\) is monotonically increasing in the variable z and converges to \(r^{k-1} g^{(k,p)}(r)\) as \(z\to\infty\). Thus, the monotone convergence theorem of Beppo Levi yields the desired

$$\begin{array}{*{20}l} \lim\limits_{v\to\infty}{ G(v)} - \lim\limits_{t\searrow0}{ G(t) } &= \lim\limits_{z\to\infty}{\left(G(z) - G\left(z^{-1}\right) \right)} \\ &= \lim\limits_{z\to\infty}{ \beta(z) - \beta\left(z^{-1}\right)} \\ &= \lim\limits_{z\to\infty}{ \omega_{k,p} \int\limits_{0}^{\infty}{ r^{k-1} \tilde{g}^{(k,p)}(z,r) \:dr} } \\ &= \omega_{k,p} \, I_{k}\!\left(g^{(k,p)}\right) = 1. \end{array} $$

Therefore, G defined by \(G(v) = \beta (v) - \lim \limits _{t \searrow 0}{\beta (t)}\), v>0, is the cdf of a positive random variable. Finally, because of

$$\begin{array}{*{20}l} g^{(k,p)}(r) = h\!\left(r^{p}\right) &= C_{p}^{k} \int\limits_{0}^{\infty}{ v^{\frac{k}{p}} e^{-\frac{r^{p}}{p}v} \:d\beta(v)} \\ &= C_{p}^{k} \int\limits_{0}^{\infty}{ v^{\frac{k}{p}} e^{-\frac{r^{p}}{p}v} \:d\left(\beta(v) - \lim\limits_{t \searrow 0}{\beta(t)} \right)} \\ &= C_{p}^{k} \int\limits_{0}^{\infty}{ v^{\frac{k}{p}} e^{-\frac{r^{p}}{p}v} \:dG(v)}, \quad r>0, \end{array} $$

we have \(g^{(k,p)} = g_{SMN;G}^{(k,p)}\) a.e. in \([0,\infty)\) and \(X \sim SMN_{k,p}(G)\). □
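To illustrate this criterion, assume — this specific form is our assumption for illustration, not a statement taken from the preceding sections — that a dg is given by \(g^{(k,p)}(r) = c \left(1+\frac{r^{p}}{\nu}\right)^{-\frac{\nu+k}{p}}\) for a normalizing constant c>0 and ν>0, a natural candidate for the p-generalized Student-t case simulated in Appendix 1. Then \(h(y)=c\left(1+\frac{y}{\nu}\right)^{-\frac{\nu+k}{p}}\) and

$$\frac{d^{m} h}{dy^{m}}(y) = \left(-1\right)^{m} c \, \nu^{-m} \prod\limits_{j=0}^{m-1}{\left(\frac{\nu+k}{p}+j\right)} \left(1+\frac{y}{\nu}\right)^{-\frac{\nu+k}{p}-m}, \quad y>0, $$

so h is completely monotone and, according to Lemma 5.3, a distribution with such a dg is a scale mixture of p-generalized Gaussian distributions.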

Before proving the general statement of Corollary 4.3, we prove the following particular one.

Corollary 5.1

Let \(X \sim \Phi _{g^{(k,p)}}\) and assume that \(g^{(k,p)}\!\left (\sqrt [p]{\cdot }\right)\) is completely monotone in \((0,\infty)\) and has inverse Laplace–Stieltjes transform α, \(g^{(k,p)}\!\left (\sqrt [p]{y}\right) = \int \limits _{0}^{\infty }{ e^{-yt} \:d\alpha (t) }\), y>0. Then, \(X \sim SMN_{k,p}(G)\) and the mixture cdf G satisfies the representation

$$\alpha(t) = \frac{p}{\omega_{k,p} \, \Gamma\!\left(\frac{k}{p}\right)} \int\limits_{1}^{t}{ z^{\frac{k}{p}} \:dG(pz)}, \quad t>0. $$

Moreover, the probability distribution corresponding to G is regular and has pdf \(f_{G}\) if and only if α is absolutely continuous and has pdf \(f_{\alpha}\) where both pdfs are connected by

$$f_{G}(s) = \omega_{k,p} \, \Gamma\!\left(\frac{k}{p}\right) p^{-2} \left(\frac{s}{p}\right)^{-\frac{k}{p}} f_{\alpha}\!\left(\frac{s}{p}\right), \quad s>0. $$

Proof of Corollary 5.1

According to the second part of the proof of Lemma 5.3, on the one hand, there exists a nondecreasing function α satisfying \(g^{(k,p)}(\sqrt [p]{y}) = \int \limits _{0}^{\infty }{ e^{-yt} \:d\alpha (t) }\), y>0. Since \(X \sim SMN_{k,p}(G)\) for a suitable mixture cdf G, on the other hand, we have \(g^{(k,p)}(\sqrt [p]{y}) = g_{SMN;G}^{(k,p)}(\sqrt [p]{y}) = C_{p}^{k} \int \limits _{0}^{\infty }{ v^{\frac {k}{p}} e^{-\frac {1}{p}yv} \:dG(v) }\), y>0. Then, changing variables \(z=\frac {1}{p}v\),

$$\int\limits_{0}^{\infty}{ e^{-yt} \:d\alpha(t)} = C_{p}^{k} \int\limits_{0}^{\infty}{ v^{\frac{k}{p}} e^{-\frac{1}{p}yv} \:dG(v) } = \frac{p}{\omega_{k,p} \, \Gamma\!\left(\frac{k}{p}\right)} \int\limits_{0}^{\infty}{ z^{\frac{k}{p}} e^{-yz} \:dG(pz) } $$

and, by the uniqueness of the Laplace–Stieltjes transform together with properties of Stieltjes integrals, it turns out that

$$\alpha(t) = \frac{p}{\omega_{k,p} \, \Gamma\!\left(\frac{k}{p}\right)} \int\limits_{1}^{t}{ z^{\frac{k}{p}} \:dG(pz)}, \quad t>0. $$

Hence, the regularity properties of the probability distributions corresponding to G and α are equivalent. Moreover, since \(f_{G}\) is the pdf of a positive random variable and there holds

$$f_{\alpha}(t) = \frac{p}{\omega_{k,p} \, \Gamma\!\left(\frac{k}{p}\right)} t^{\frac{k}{p}} \frac{dG(pt)}{dt} = \frac{p^{2}}{\omega_{k,p} \, \Gamma\!\left(\frac{k}{p}\right)} t^{\frac{k}{p}} \cdot f_{G}(pt), $$

t>0, according to the above equation involving fG, it follows fG(s)=0 for all s≤0 and

$$f_{G}(s) = \omega_{k,p} \, \Gamma\!\left(\frac{k}{p}\right) p^{-2} \left(\frac{s}{p}\right)^{-\frac{k}{p}} f_{\alpha}\!\left(\frac{s}{p}\right), \quad s>0. $$

□

Proof of Theorem 4.1

Let \(X \sim SMAN_{n,p}(\mu,D,G)\) for the cdf G of a positive random variable. Then, \(g^{(k,p)} = g_{SMN;G}^{(k,p)}\) according to Corollary 4.1 and \(g_{SMN;G}^{(k,p)}\!\left (\sqrt [p]{\cdot }\right)\) is completely monotone in \([0,\infty)\) according to Lemma 5.3. Vice versa, let \(X \sim AEC_{n,p}\!\left(\mu,D,g^{(k,p)}\right)\) with k=rk(D) and assume \(h(\cdot) = g^{(k,p)}\!\left (\sqrt [p]{\cdot }\right)\) to be completely monotone in \([0,\infty)\). Then, according to Lemma 5.3, \(g^{(k,p)}\) is the dg of a distribution from

$$\left\{ SMN_{k,p}(G) \colon G \text{ is the cdf of a positive random variable } \right\}, $$

i.e. \(\Phi _{g^{(k,p)}} = SMN_{k,p}(G)\) for a suitable cdf G of a positive random variable. Thus, \(X \stackrel {d}{=} \mu + W_{1}\sqrt {S_{1}} \tilde {X}\) where \(\tilde {X} \sim SMN_{k,p}(G)\) because of (1) and, finally, \(X \sim SMAN_{n,p}(\mu,D,G)\) because of Lemma 4.1. □

Proof of Corollary 4.3

According to (1), for \(X \sim AEC_{n,p}\!\left(\mu,D,g^{(k,p)}\right)\) with rk(D)=k, we have

$$X \stackrel{d}{=} \mu + W_{1}\sqrt{S_{1}} \tilde{X} \qquad\text{ where } \tilde{X} \sim \Phi_{g^{(k,p)}}. $$

Because \(g^{(k,p)}\!\left (\sqrt [p]{\cdot }\right)\) is completely monotone in \((0,\infty)\), Corollary 5.1 yields \(\tilde {X} \sim SMN_{k,p}(G)\) as well as

$$\alpha(t) = \frac{p}{\omega_{k,p} \, \Gamma\!\left(\frac{k}{p}\right)} \int\limits_{1}^{t}{ z^{\frac{k}{p}} \:dG(pz)}, \quad t>0, $$

where α is the inverse Laplace-Stieltjes transform of \(g^{(k,p)}\!\left (\sqrt [p]{\cdot }\right)\). The relationship between the pdfs fG and fα follows in analogy to the second part of the proof of Corollary 5.1. □

5.4 Proofs regarding the “Scale mixed p-generalized Gaussian processes having axis-aligned fdds” section

Proof of Lemma 4.3

Using Fubini’s theorem and changing variables \(y = v^{-\frac {1}{p}} z\) with \(\frac {dy}{dz} = v^{-\frac {1}{p}}\), for all \(k\in \mathbb {N}\) and r≥0, there holds

$$\begin{array}{*{20}l} \int\limits_{-\infty}^{\infty}{ g_{SMN;G}^{(k+1,p)}\!\left(\sqrt[p]{ r^{p} + \left|y\right|^{p}} \right) \:dy } &= 2 \int\limits_{0}^{\infty}{ g_{SMN;G}^{(k+1,p)}\!\left(\sqrt[p]{ r^{p} + y^{p}} \right) \:dy} \\ &= \left(C_{p}^{k} \int\limits_{0}^{\infty}{ v^{\frac{k}{p}} e^{-\frac{r^{p}}{p}v} \:dG(v)} \right) \frac{p^{1-\frac{1}{p}}}{\Gamma\!\left(\frac{1}{p}\right)} \int\limits_{0}^{\infty}{ e^{-\frac{z^{p}}{p}} \:dz}. \end{array} $$

Since G is independent of k, see (9), the first factor on the right-hand side of the latter equation equals the value of the dg \(g_{SMN;G}^{(k,p)}\) at r. Furthermore, the corresponding second factor equals 1. Thus, the assertion follows with

$$\begin{array}{*{20}l} \int\limits_{-\infty}^{\infty}{ g_{SMN;G}^{(k+1,p)}\!\left(\left|\left(x_{1},\ldots,x_{k},x_{k+1}\right)^{\text{\texttt{T}}}\right|_{p}\right) \:dx_{k+1} } &= \int\limits_{-\infty}^{\infty}{ g_{SMN;G}^{(k+1,p)}\!\left(\sqrt[p]{ r^{p} + \left|y\right|^{p}} \right) \:dy} \\ &= g_{SMN;G}^{(k,p)}(r) \\ &= g_{SMN;G}^{(k,p)}\!\left(\left|\left(x_{1},\ldots,x_{k}\right)^{\text{\texttt{T}}}\right|_{p}\right) \end{array} $$

for all \(k\in \mathbb {N}\) and \(\left (x_{1},\ldots,x_{k}\right)^{\text {\texttt {T}}} \in \mathbb {R}^{k}\) where \(r=\left|\left(x_{1},\ldots,x_{k}\right)^{\text{\texttt{T}}}\right|_{p}\) and \(y=x_{k+1}\). □
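The consistency property just proved can also be checked numerically. The following Python sketch does so under an assumed Gamma mixing cdf G with shape ν/p and scale p/ν — our illustrative choice for a p-generalized Student-t mixture, to be checked against the exact parametrizations of the main text; the closed form of the dg below is obtained from (9) under exactly this assumption, and g_smn is a hypothetical name.

import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln

def g_smn(r, k, p, nu):
    # assumed closed form of g_{SMN;G}^{(k,p)} under Gamma(nu/p, scale p/nu) mixing:
    # C_p^k (nu/p)^(nu/p) Gamma((k+nu)/p) / Gamma(nu/p) * (p / (r^p + nu))^((k+nu)/p)
    log_c = k * ((1.0 - 1.0 / p) * np.log(p) - np.log(2.0) - gammaln(1.0 / p))
    log_g = (log_c + (nu / p) * np.log(nu / p) - gammaln(nu / p)
             + gammaln((k + nu) / p)
             + ((k + nu) / p) * (np.log(p) - np.log(r ** p + nu)))
    return np.exp(log_g)

p, nu, k, r = 3.0, 3.0, 2, 0.7
# integrate the (k+1)-dimensional dg over the last coordinate and compare
lhs = quad(lambda y: g_smn((r ** p + abs(y) ** p) ** (1.0 / p), k + 1, p, nu),
           -np.inf, np.inf)[0]
print(lhs, g_smn(r, k, p, nu))  # both values agree up to quadrature error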

Proof of Theorem 4.2

Let \(n\in \mathbb {N}\) and let \(J=\left\{t_{1},\ldots,t_{n}\right\}\) be an arbitrary subset of I having n elements. Then, \(J\in \mathcal {H}(I)\), and \(AEC_{n,p}\!\left (\mu,D,g_{SMN;G}^{(k,p)}\right)\) with \(\mu=\mu^{(n)}=\left(m(t_{1}),\ldots,m(t_{n})\right)^{\text{\texttt{T}}}\) and \(D= \text{ diag }\left(S(t_{1}),\ldots,S(t_{n})\right)\) where k=rk(D) is the fdd of the random process X corresponding to \(X_{J} = \left (X_{t_{1}},\ldots,X_{t_{n}}\right)^{\text {\texttt {T}}}\). Moreover, \(AN_{n,p}\left(0_{n},D\right)\) and \(\mathfrak {L}\left (\mu ^{(n)}+V^{-\frac {1}{p}}Z_{J}\right)\) are the fdds of Z corresponding to \(Z_{J} = \left (Z_{t_{1}},\ldots,Z_{t_{n}}\right)^{\text {\texttt {T}}}\) and of Y corresponding to \(Y_{J} = \left (Y_{t_{1}},\ldots,Y_{t_{n}}\right)^{\text {\texttt {T}}}\), respectively. By (8) and Corollary 4.1,

$$\mathfrak{L}\left(\mu^{(n)}+V^{-\frac{1}{p}}Z_{J}\right) = SMAN_{n,p}(\mu,D,G) = AEC_{n,p}\!\left(\mu,D,g_{SMN;G}^{(k,p)}\right) $$

for all \(n\in \mathbb {N}\) and every set \(J = \left \{t_{1},\ldots,t_{n}\right \} \in \mathcal {H}(I)\) with |J|=n. Thus, the random processes X and Y are equivalent meaning that they have one and the same family of fdds. □

Before we prove Theorem 4.3, we consider the following special case of it. To this end, notice that the sequence \(\left (\sigma _{g_{PE}^{(k,p)}}^{2}\right)_{k\in \mathbb {N}}\) of all univariate variance components of multivariate p-generalized spherical Gaussian distributions equals the sequence \(\left (\sigma _{g_{SMN;G}^{(k,p)}}^{2}\right)_{k\in \mathbb {N}}\) with \(G=\mathbbm{1}_{(1,\infty)}\). Thus, according to the paragraph before Theorem 4.3, it is constant. Subsequently, an arbitrary element of it is denoted by \(\sigma _{g_{PE}^{(p)}}^{2}\) and satisfies \(\sigma _{g_{PE}^{(p)}}^{2} = \sigma _{g_{SMN;\mathbbm{1}_{(1,\infty)}}^{(p)}}^{2} = p^{\frac {2}{p}} \frac { \Gamma \!\left (\frac {3}{p}\right) }{ \Gamma \!\left (\frac {1}{p}\right) }\).

Lemma 5.4

Let \(Z=\{Z_{t}\}_{t\in I} \sim AGP_{p}(m,S)\). Then, Z is a second order random process, its expectation function is equal to m, and its covariance function \(\Gamma \colon I \times I \to \mathbb {R}\) is given by

$$\Gamma(s,t) = \left\{ \begin{array}{ll} \sigma_{g_{PE}^{(p)}}^{2} \cdot S(t) & \text{ if } s=t \\ 0 & \text{ else} \end{array}\right.. $$

The proof of this lemma follows immediately from Corollaries 4.1 and 4.2 and is therefore omitted here.

Proof of Theorem 4.3

Let \(Z=\{Z_{t}\}_{t\in I} \sim AGP_{p}(0_{I},S)\) be independent of \(V \sim G\). Then, according to Theorem 4.2, X is equivalent to the random process \(Y = \left \{ m(t) + V^{-\frac {1}{p}} Z_{t} \right \}_{t \in I}\), and \(V^{-\frac {1}{p}}\) and \(Z_{t}\) as well as \(V^{-\frac {2}{p}}\) and \(Z_{s}Z_{t}\) are independent for all indices \(s,t\in I\). Because of

$$\mathbb{E}(X_{t}) = \mathbb{E}\left(m(t) + V^{-\frac{1}{p}} Z_{t}\right) = m(t) + \mathbb{E}\left(V^{-\frac{1}{p}}\right) \mathbb{E}(Z_{t}) $$

and \(\mathbb {E}(Z_{t}) = 0\) for all \(t\in I\) according to Lemma 5.4, the expectation of \(X_{t}\) exists and equals m(t) if \(\mathbb {E}\left(V^{-\frac {1}{p}}\right)\) is finite. Furthermore, for all \(t\in I\), the independence of \(V^{-\frac {2}{p}}\) and \(Z_{t} Z_{t} = Z_{t}^{2}\) yields

$$\mathbb{E}\left(X_{t}^{2}\right) = \left(m(t)\right)^{2} + 2m(t) \mathbb{E}\left(V^{-\frac{1}{p}}\right) \mathbb{E}(Z_{t}) + \mathbb{E}\left(V^{-\frac{2}{p}}\right) \mathbb{E}\left(Z_{t}^{2}\right). $$

As Z is a second order random process, X is a second order random process, too, if \(\mathbb {E}\left (V^{-\frac {2}{p}}\right)\) is finite. In this case, for all s,tI, using the independence of \(V^{-\frac {2}{p}}\) and ZsZt as well as the covariance function of a centered p-generalized Gaussian process Z having axis-aligned fdds with scale function S from Lemma 5.4, it follows

$$\begin{array}{*{20}l} \Gamma(s,t) = \text{Cov}\!\left(X_{s},X_{t}\right) &= \mathbb{E}\left(V^{-\frac{2}{p}}\right) \mathbb{E}\left(Z_{s} Z_{t}\right) \\ &= \left\{\begin{array}{ll} \mathbb{E}\left(V^{-\frac{2}{p}}\right) \sigma_{g_{PE}^{(p)}}^{2} \cdot S(t) & \text{ if } s=t \\ 0 & \text{ else} \end{array}\right.. \end{array} $$

The equation \(\mathbb {E}\left (V^{-\frac {2}{p}}\right) \sigma _{g_{PE}^{(p)}}^{2} = \sigma _{g_{SMN;G}^{(p)}}^{2}\phantom {\dot {i}\!}\) yields the asserted result. □

Proof of Theorem 4.4

Let X be strictly stationary. Then, for all \(t_{1}\in I\) and \(h \in H_{t_{1}} = \{ h\in \mathbb {R} \colon t_{1}+h \in I\}\), the distributions \(SMAN_{1,p}(m(t_{1}),S(t_{1}),G)\) of \(X_{t_{1}}\) and \(SMAN_{1,p}(m(t_{1}+h),S(t_{1}+h),G)\) of \(X_{t_{1}+h}\) are equal. If \(S(t_{1})=0\), the distribution of \(X_{t_{1}}\) is the univariate Dirac distribution in \(m(t_{1})\) which can be considered to be the scale mixture of the univariate kapec Gaussian distribution with k=0, location parameter \(m(t_{1})\) and scale parameter 0. Therefore, for all \(h \in H_{t_{1}}\), \(\mathfrak {L}\left (X_{t_{1}+h}\right)\) is the univariate Dirac distribution in \(m(t_{1})\), too, and it follows \(S(t_{1}+h)=0=S(t_{1})\) for all \(h \in H_{t_{1}}\). Thus, \(S=0_{I}\). Since \(\mathfrak {L}\left (X_{t_{1}+h}\right) = SMAN_{1,p}(m(t_{1}+h),0,G)\) is defined to be the Dirac distribution in \(m(t_{1}+h)\), it follows \(m(t_{1}+h)=m(t_{1})\) for all \(h \in H_{t_{1}}\), i.e. m is constant on I. If \(S(t_{1})>0\), according to “The class of n-dimensional rank-k-continuous axis-aligned p-generalized elliptically contoured distributions” section, for all \(t_{1}\in I\) and \(h \in H_{t_{1}}\), \(\mathfrak {L}\left (X_{t_{1}}\right)\) and \(\mathfrak {L}\left (X_{t_{1}+h}\right)\) have pdfs

$$\begin{array}{*{20}l} f_{X_{t_{1}}}(x) &= \frac{C_{p}}{\sqrt{S(t_{1})}} \: g_{SMN;G}^{(1,p)}\!\left(\left| \frac{x-m(t_{1})}{\sqrt{S(t_{1})}} \right|\right), \quad x\in\mathbb{R}, \\ f_{X_{t_{1}+h}}(x) &= \frac{C_{p}}{\sqrt{S(t_{1}+h)}} \: g_{SMN;G}^{(1,p)}\!\left(\left| \frac{x-m(t_{1}+h)}{\sqrt{S(t_{1}+h)}} \right|\right), \quad x\in\mathbb{R}, \end{array} $$

respectively, where \(C_{p} = \frac { p^{1-\frac {1}{p}} }{ 2\Gamma \!\left (\frac {1}{p}\right) }\). Because \(\mathfrak {L}\left (X_{t_{1}}\right) = \mathfrak {L}\left (X_{t_{1}+h}\right)\), we have \(f_{X_{t_{1}}} = f_{X_{t_{1}+h}}\), too. As \(f_{X_{t_{1}}}\) and \(f_{X_{t_{1}+h}}\), \(h \in H_{t_{1}}\), are symmetric with respect to the straight lines \(x=m(t_{1})\) and \(x=m(t_{1}+h)\), respectively, both parallel to the ordinate axis, it follows \(m(t_{1})=m(t_{1}+h)\) for all \(t_{1}\in I\) and \(h \in H_{t_{1}}\). Thus, m is constant on I. Furthermore, since \(f_{X_{t_{1}}}(m(t_{1})) = \frac {C_{p}}{\sqrt {S(t_{1})}}\) and \(f_{X_{t_{1}+h}}(m(t_{1}+h)) = \frac {C_{p}}{\sqrt {S(t_{1}+h)}}\), the identity of these pdfs implies \(S(t_{1})=S(t_{1}+h)\) for all \(t_{1}\in I\) and \(h \in H_{t_{1}}\). Thus, the constancy of S on I is shown. The other direction of this proof is omitted here. □

Proof of Theorem 4.5

Assume 1). According to Theorem 4.4, the constancy of m and S yields strict stationarity of X. Moreover, according to Theorem 4.3, it follows from the finiteness of \(\mathbb {E}\left(V^{-\frac {2}{p}}\right)\) that X is a second order random process having expectation function m and covariance function Γ given by \(\Gamma (t,t) = \sigma _{g_{SMN;G}^{(p)}}^{2} S(t)\) for all \(t\in I\) and \(\Gamma(s,t)=0\) for all \(s,t\in I\) with s≠t. Because of m(t)=μ and S(t)=δ for all \(t\in I\), the expectation function of X is constantly equal to μ and the covariance function of X satisfies \(\Gamma (t,t) = \sigma _{g_{SMN;G}^{(p)}}^{2} \delta \) for all \(t\in I\) and \(\Gamma(s,t)=0\) for all \(s,t\in I\) with s≠t. Thus, 1) implies 2). Further, every strictly stationary second order random process is weakly stationary, and the covariance function Γ of X from 2) evaluated at \((s,t)\in I\times I\) is representable as a function depending only on the difference s−t since it follows from property 3) of the function K that \(\Gamma (t,t) = \sigma _{g_{SMN;G}^{(p)}}^{2} \delta = K(0) = K(t-t)\) for all \(t\in I\) as well as \(\Gamma(s,t)=0=K(s-t)\) for all \(s,t\in I\) with s≠t. Thus, the implication from 2) to 3) is shown. Additionally, it follows from 3) that \(\text{Cov}(X_{s},X_{t})=\Gamma (s,t)=0\) for all \(s,t\in I\) with s≠t and \(\mathbb {E}(X_{t}) = m(t) = \mu \) as well as \(\text{Var}(X_{t}) = \Gamma (t,t) = \sigma _{g_{SMN;G}^{(p)}}^{2} \delta \) for all \(t\in I\). Hence, assuming 3), the random variables \(X_{t}\), \(t\in I\), are uncorrelated and have constant expectation μ and variance \(\sigma _{g_{SMN;G}^{(p)}}^{2} \delta \). Thus, 4) follows from 3). Finally, let us assume 4) to hold. According to Theorem 4.3, X is a second order random process if \(\mathbb {E}\left (V^{-\frac {2}{p}}\right)\) is finite. Furthermore, because of the definition of white noise as in 4), it holds \(m(t) = \mathbb {E}(X_{t}) = \mu \) as well as \(\sigma _{g_{SMN;G}^{(p)}}^{2} S(t) = \text{Cov}(X_{t},X_{t}) = \text{Var}(X_{t}) = \sigma _{g_{SMN;G}^{(p)}}^{2} \delta \) for all \(t\in I\). Then, m and S are constantly equal to μ and δ, respectively. Thus, the implication from 4) to 1) is shown. □

Finally, the proof of Theorem 4.6 is based on Lemma 5.6. In preparation for the proof of this lemma, we establish the following special case.

Lemma 5.5

Let \(X \sim AEC_{n,p}\!\left(\mu,D,g^{(k,p)}\right)\) with \(D= \text{ diag }\left(d_{1},\ldots,d_{n}\right)\) having nonnegative diagonal elements and positive rank k. Further, let \(b \in \mathbb {R}^{n}\) and \(\Gamma = \text { diag }\!\left (\gamma _{1},\ldots,\gamma _{n} \right) \in \mathbb {R}^{n \times n}\) such that \(\Gamma D \Gamma = \text { diag }\!\left (\gamma _{1}^{2} d_{1},\ldots,\gamma _{n}^{2} d_{n} \right)\) has positive rank \(k_{\Gamma} \geq 1\). Then,

$$ \mathfrak{L}(\Gamma X + b) = AEC_{n,p}\!\left(\Gamma\mu + b, \Gamma D \Gamma, g_{(k)}^{(k_{\Gamma},p)} \right). $$

Proof

Assuming \(\gamma _{m_{\epsilon }} \neq 0\) for \(\epsilon=1,\ldots,l\) and \(\gamma _{m_{\epsilon }} = 0\) for \(\epsilon=l+1,\ldots,n\) where \(m_{1}<m_{2}<\ldots<m_{l}\) and \(m_{l+1}<m_{l+2}<\ldots<m_{n}\), and using notations from “The class of n-dimensional rank-k-continuous axis-aligned p-generalized elliptically contoured distributions” section, it follows that

$$\Gamma W_{1} \sqrt{S_{1}} = \left(\begin{array}{c} \gamma_{1} {e_{1}^{(n)}}^{\text{\texttt{T}}} \\ \vdots \\ \gamma_{n} {e_{n}^{(n)}}^{\text{\texttt{T}}} \end{array}\right) \! \left(\begin{array}{ccc} \sqrt{d_{i_{1}}} e_{i_{1}}^{(n)} & \cdots & \sqrt{d_{i_{k}}} e_{i_{k}}^{(n)} \end{array}\right) = \left(\begin{array}{c} \gamma_{1} f(1) \\ \vdots \\ \gamma_{n} f(n) \end{array}\right) \in \mathbb{R}^{n \times k} $$

where

$$f(\eta) = \left\{ \begin{array}{ll} \sqrt{d_{i_{j}}} {e_{j}^{(k)}}^{\text{\texttt{T}}} & \quad \text{ if}\ \eta=i_{j} \text{ for a}\ j\in\{1,\ldots,k\} \\ 0_{k}^{\text{\texttt{T}}} & \quad \text{ else } \end{array}\right., \quad \eta=1,\ldots,n. $$

Since \(\gamma_{\eta}=0\) for \(\eta\in\{m_{l+1},\ldots,m_{n}\}\), there holds

$$\Gamma W_{1} \sqrt{S_{1}} = \left(\begin{array}{c} h(1) \\ \vdots \\ h(n) \end{array}\right) \in \mathbb{R}^{n \times k} $$

where

$$h(\eta) = \left\{ \begin{array}{ll} \gamma_{\eta} \sqrt{d_{\eta}} {e_{j}^{(k)}}^{\text{\texttt{T}}} & \quad \text{ if } \eta \in K \text{ and } \eta=i_{j}\ \text{for some}\ j\in\{1,\ldots,k\} \\ 0_{k}^{\text{\texttt{T}}} & \quad \text{ else } \end{array}\right., $$

η=1,…,n, and

$$ K = \left\{ \eta \colon \eta=i_{j}\ \text{for a}\ j\in\{1,\ldots,k\}\ \text{and}\ \eta=m_{\epsilon}\ \text{for a}\ \epsilon\in\{1,\ldots,l\} \right\}. $$
(11)

Then, |K|≥1 because of \(\text{rk}(\Gamma D \Gamma)\geq 1\), and \(\Gamma W_{1} \sqrt {S_{1}}\) has |K| columns each being the product of a positive constant, a constant from \(\mathbb {R}\backslash \{0\}\) and a unit vector of \(\mathbb {R}^{n}\). Particularly, all these unit vectors differ from each other and, using the notation \(\delta_{im}\) for Kronecker’s delta, we have

$$\left|K\right| = \sum\limits_{j=1}^{k}{\sum\limits_{\epsilon=1}^{l}{ \delta_{i_{j} m_{\epsilon}} }}. $$

Hence, \(\Gamma W_{1} \sqrt {S_{1}}\) has \(k-|K|\) columns equal to \(0_{n}\). For \(Y = \left (Y_{1},\ldots,Y_{k}\right)^{\text {\texttt {T}}} \sim \Phi _{g^{(k,p)}}\), it follows that

$$\Gamma W_{1} \sqrt{S_{1}} Y = \left(\begin{array}{c} \theta(1) \\ \vdots \\ \theta(n) \end{array}\right) \in \mathbb{R}^{n} $$

where

$$\theta(\eta) = \left\{ \begin{array}{ll} \gamma_{\eta} \sqrt{d_{\eta}} Y_{j} & \quad \text{ if } \eta \in K \text{ and } \eta=i_{j}\ \text{for some}\ j\in\{1,\ldots,k\} \\ 0 & \quad \text{ else } \end{array}\right., $$

η=1,…,n, and the vector \(\Gamma W_{1} \sqrt {S_{1}} Y\) consists of |K| different components of Y multiplied by nonzero constants and of n−|K| zeros. Thus, for \(B\in \mathfrak {B}^{n}\), we have

$$P\!\left(\Gamma W_{1} \sqrt{S_{1}} Y \in B \right) = P\left(\left(\begin{array}{c} \theta(1) \\ \vdots \\ \theta(n) \end{array}\right) \in B \;, \; Y_{j}\in\mathbb{R} \text{ for all } j\in\{1,\ldots,k\} \backslash J \right) $$

where \(J=\left\{ j\in\{1,\ldots,k\} \colon i_{j} \in K \right\}\). Now, let

$$J = \left\{ j_{1},\ldots,j_{|K|} \right\} \qquad\text{ with } j_{1} < j_{2} < \ldots < j_{|K|} $$

be an enumeration of the elements of J and

$$M = \left(\begin{array}{c} \psi(1) \\ \vdots \\ \psi(n) \end{array}\right) \in \mathbb{R}^{n \times \left|K\right|} $$

where

$$\psi(\eta) = \left\{ \begin{array}{ll} \gamma_{\eta} \sqrt{d_{\eta}} {e_{\kappa}^{(|K|)}}^{\text{\texttt{T}}} & \quad \text{ if } \eta \in K \text{ and } \eta=i_{j_{\kappa}} \text{ for some } \kappa\in\{1,\ldots,|K|\} \\ 0_{|K|}^{\text{\texttt{T}}} & \quad \text{ else } \end{array}\right. $$

for η=1,…,n. Then, |J|=|K| and \(\Gamma W_{1} \sqrt {S_{1}} Y \stackrel {d}{=} MZ\) for \(Z \sim \Phi _{g_{(k)}^{(|K|,p)}}\phantom {\dot {i}\!}\). Thus, because of rk(M)=|K|, it follows

$$\begin{array}{*{20}l} \Gamma X + b & \stackrel{d}{=} \left(\Gamma\mu+b\right) + \Gamma W_{1} \sqrt{S_{1}} Y, \quad Y \sim \Phi_{g^{(k,p)}}\\ &\stackrel{d}{=} \left(\Gamma\mu+b\right) + M Z, \quad Z \sim \Phi_{g_{(k)}^{(|K|,p)}} \\ &\sim AEC_{n,p}\!\left(\Gamma\mu + b, MM^{\text{\texttt{T}}}, g_{(k)}^{(|K|,p)} \right). \end{array} $$

Note that M can be extended to \(\Gamma W_{1} \sqrt {S_{1}}\) by adding k−|K| zero columns. Therefore,

$$MM^{\text{\texttt{T}}} = \left(\Gamma W_{1} \sqrt{S_{1}}\right)\!\left(\Gamma W_{1} \sqrt{S_{1}}\right)^{\text{\texttt{T}}} = \Gamma W_{1} S_{1} W_{1}^{\text{\texttt{T}}} \Gamma = \Gamma D \Gamma, $$

and \(|K|=\text{rk}(M)=\text{rk}\left(MM^{\text{\texttt{T}}}\right)=\text{rk}(\Gamma D \Gamma)\). Finally, this yields

$$\mathfrak{L}(\Gamma X + b) = AEC_{n,p}\!\left(\Gamma\mu + b, \Gamma D \Gamma, g_{(k)}^{(k_{\Gamma},p)} \right). $$

□

Using this particular result, we prove the following more general one.

Lemma 5.6

Let \(X \sim AEC_{n,p}\!\left(\mu,D,g^{(k,p)}\right)\) with \(D= \text{ diag }\left(d_{1},\ldots,d_{n}\right)\) having nonnegative diagonal elements and rank k≥0. Further, let \(b \in \mathbb {R}^{n}\) and \(\Gamma = \text { diag }\!\left (\gamma _{1},\ldots,\gamma _{n} \right) \in \mathbb {R}^{n \times n}\). Then,

$$\mathfrak{L}(\Gamma X + b) = AEC_{n,p}\!\left(\Gamma\mu + b, \Gamma D \Gamma, g_{(k)}^{(k_{\Gamma},p)} \right), $$

where \(\Gamma D \Gamma = \text { diag }\!\left (\gamma _{1}^{2} d_{1},\ldots,\gamma _{n}^{2} d_{n} \right)\) and \(k_{\Gamma}=\text{rk}(\Gamma D \Gamma)\geq 0\).

Proof of Lemma 5.6

Let k=0, that is \(X \sim AEC_{n,p}\!\left(\mu,0_{n \times n},g^{(0,p)}\right)\). Then, \(\Gamma X+b\) follows the Dirac distribution in \(\Gamma\mu+b\). Using the exchangeability of \(g^{(0,p)}\) and \(g_{(0)}^{(0,p)}\), we have

$$\begin{array}{*{20}l} \mathfrak{L}(\Gamma X+b) &= AEC_{n,p}\!\left(\Gamma\mu+b,0_{n \times n},g^{(0,p)}\right) \\ &= AEC_{n,p}\!\left(\Gamma\mu+b,\Gamma 0_{n \times n} \Gamma,g_{(0)}^{(0,p)}\right). \end{array} $$

If D has positive rank and Γ is assumed to satisfy \(k_{\Gamma}=\text{rk}(\Gamma D \Gamma)\geq 1\), the assertion coincides with the result of Lemma 5.5. Finally, let D have positive rank and let Γ satisfy \(k_{\Gamma}=\text{rk}(\Gamma D \Gamma)=0\), i.e. \(\Gamma D \Gamma=0_{n \times n}\). In analogy to the proof of Lemma 5.5 and using the same notations, the set K in (11) is empty. Then, |K|=0, \(\Gamma W_{1} \sqrt {S_{1}}\) consists only of zero columns, and, for \(Y \sim \Phi _{g^{(k,p)}}\) and every \(B\in \mathfrak {B}^{n}\), we have

$$P\!\left(\Gamma W_{1} \sqrt{S_{1}} Y \in B \right) = P\!\left( 0_{n} \in B \;,\; Y_{j}\in\mathbb{R} \text{ for all } j\in\{1,\ldots,k\} \right) = \mathbbm{1}_{B}\!\left(0_{n}\right). $$

Particularly, if \(B=\{0_{n}\}\), it follows that

$$P\!\left(\Gamma W_{1} \sqrt{S_{1}} Y = 0_{n} \right) = P\!\left(\Gamma W_{1} \sqrt{S_{1}} Y \in \{0_{n}\} \right)=1. $$

Thus, \(\Gamma W_{1} \sqrt {S_{1}} Y = 0_{n}\) P-a.s., and the stochastic representation \(\Gamma X + b \stackrel {d}{=} (\Gamma \mu + b) + \Gamma W_{1} \sqrt {S_{1}} Y\) with \(Y \sim \Phi _{g^{(k,p)}}\), which holds according to (1), yields

$$\Gamma X + b = \Gamma\mu + b \qquad P-\text{a.s.} $$

or, equivalently, \(\mathfrak {L}(\Gamma X + b) = AEC_{n,p}\!\left (\Gamma \mu +b,0_{n \times n},g_{(k)}^{(0,p)}\right)\). □

Proof of Theorem 4.6

Let \(n\in \mathbb {N}\) and let \(J=\left\{t_{1},\ldots,t_{n}\right\}\) be an arbitrary subset of I. Moreover, let \(Y_{t}=\gamma(t)X_{t}+b(t)\), \(t\in I\), and \(Y=\{Y_{t}\}_{t\in I}\). Then, for \(Y_{J} = \left (Y_{t_{1}},\ldots,Y_{t_{n}}\right)^{\text {\texttt {T}}}\) and \(X_{J} = \left (X_{t_{1}},\ldots,X_{t_{n}}\right)^{\text {\texttt {T}}}\), we have

$$Y_{J} = \left(\begin{array}{c} Y_{t_{1}} \\ \vdots \\ Y_{t_{n}} \end{array}\right) = \left(\begin{array}{c} \gamma(t_{1}) X_{t_{1}} + b(t_{1}) \\ \vdots \\ \gamma(t_{n}) X_{t_{n}} + b(t_{n}) \end{array}\right) = \Gamma X_{J} + b $$

where \(b=\left(b(t_{1}),\ldots,b(t_{n})\right)^{\text{\texttt{T}}}\) and \(\Gamma = \text{ diag }\!\left(\gamma(t_{1}),\ldots,\gamma(t_{n})\right)\). Since

$$\mathfrak{L}(X_{J}) = AEC_{n,p}\!\left(\mu,D,g_{SMN;G}^{(k,p)}\right) $$

where \(\mu=\left(m(t_{1}),\ldots,m(t_{n})\right)^{\text{\texttt{T}}}\) and \(D= \text{ diag }\left(S(t_{1}),\ldots,S(t_{n})\right)\) with k=rk(D), making use of Lemmata 4.3 and 5.6, it follows

$$\begin{array}{*{20}l} \mathfrak{L}(Y_{J}) = \mathfrak{L}(\Gamma X_{J} + b) &= AEC_{n,p}\!\left(\Gamma\mu + b, \Gamma D \Gamma, \left(g_{SMN;G}^{(k,p)}\right)_{(k)}^{(k_{\Gamma},p)} \right) \\ &= AEC_{n,p}\!\left(\Gamma\mu + b, \Gamma D \Gamma, g_{SMN;G}^{(k_{\Gamma},p)} \right). \end{array} $$

Thus, \(AEC_{n,p}\!\left (\Gamma \mu + b, \Gamma D \Gamma, g_{SMN;G}^{(k_{\Gamma },p)} \right)\) is the fdd of Y corresponding to \(Y_{J}\). Finally, because of \(\Gamma\mu+b=\left([\gamma m+b](t_{1}),\ldots,[\gamma m+b](t_{n})\right)^{\text{\texttt{T}}}\) and \(\Gamma D \Gamma = \text{ diag }\left([\gamma^{2} S](t_{1}),\ldots,[\gamma^{2} S](t_{n})\right)\), we get

$$Y \sim SMAGP_{p}\left(\gamma m + b, \gamma^{2} S, G \right). $$

□

Discussion

In the present paper, first, kapec distributions are introduced and their properties such as stochastic representations, moments, and density-like representations are studied. Secondly, based on the Kolmogorov existence theorem, the existence of random processes having apec fdds with arbitrary location and scale functions and a consistent sequence of dgs of p-generalized spherical distributions is shown. Particularly, a sequence of dgs of scale mixtures of multivariate p-generalized Gaussian distributions with one and the same mixture distribution is consistent, and the corresponding processes are p-generalizations of elliptical random processes having axis-aligned fdds, see Yao (1973) and Kano (1994) for the case p=2. Thirdly, the question is answered when an n-dimensional kapec distribution with dg \(g^{(k,p)}\) is representable as a scale mixture of an n-dimensional kapec Gaussian distribution for a suitable mixture distribution of a positive random variable. It is established that the complete monotonicity of the composition h of \(g^{(k,p)}\) with the pth root function is a necessary and sufficient condition for such a representation and that the inverse Laplace–Stieltjes transform of h is connected to the cdf of the mixture distribution. For the particular case p=2, the univariate consideration is covered by Andrews and Mallows (1974) and the multivariate one by Lange and Sinsheimer (1993) and Gómez-Sánchez-Manzano et al. (2006), respectively.

Appendix 1: Further aspects of simulations

7.1 Algorithms to simulate apec distributions

The following two algorithms to simulate \(X \sim AEC_{n,p}\!\left(\mu,D,g^{(k,p)}\right)\) are based on the two stochastic representations of X, see Lemmata 2.1 and 2.4. In both cases, let \(\tilde {X} \sim \Phi _{g^{(k,p)}}\) and use notations from “The class of n-dimensional rank-k-continuous axis-aligned p-generalized elliptically contoured distributions” section.

Algorithm 1

1) Generation of a random vector \(U_{p}^{(k)}\) following the k-dimensional p-generalized uniform distribution on \(S_{k,p}\):

a) Generate \(\tilde {Z} = \left (\tilde {Z}_{1},\ldots,\tilde {Z}_{k}\right)^{\text {\texttt {T}}}\) following the k-dimensional p-generalized Gaussian distribution by generating k independent and identically distributed univariate p-generalized Gaussian random variables \(\tilde {Z}_{1},\ldots,\tilde {Z}_{k}\).

b) Compute \(R_{\tilde {Z}} = |\tilde {Z}|_{p}\) and \(U_{p}^{(k)} = \frac {\tilde {Z}}{R_{\tilde {Z}}}\).

2) Generate \(R_{\tilde {X}}\) having pdf \(f_{R_{\tilde {X}}}(r) = \omega _{k,p} \, r^{k-1} g^{(k,p)}(r) \; \mathbbm{1}_{[0,\infty)}(r)\), \(r\in \mathbb {R}\), and being a univariate random radius variable.

3) Compute \(\tilde {X} = R_{\tilde {X}} \, U_{p}^{(k)}\) and \(X = \mu + W_{1}\sqrt {S_{1}} \tilde {X}\).
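For illustration, here is a minimal Python sketch of Algorithm 1 — a sketch under stated assumptions, not the paper’s implementation. The names rpgn, algorithm1 and radius_sampler are hypothetical. The univariate sampler uses the representation \(|Z|^{p}/p \sim \text{Gamma}(1/p,1)\) of the density proportional to \(e^{-|z|^{p}/p}\); the radius sampler for step 2 must be supplied for the dg at hand.

import numpy as np

def rpgn(k, p, rng):
    # k iid univariate p-generalized Gaussian draws (density proportional to
    # exp(-|z|^p / p)), using |Z|^p / p ~ Gamma(1/p, 1) together with a random sign
    g = rng.gamma(shape=1.0 / p, scale=1.0, size=k)
    return rng.choice([-1.0, 1.0], size=k) * (p * g) ** (1.0 / p)

def algorithm1(mu, W1_sqrtS1, p, radius_sampler, rng):
    # one draw of X = mu + W1*sqrt(S1)*Xtilde with Xtilde = R * U_p^(k);
    # radius_sampler(rng) must return R with pdf omega_{k,p} r^(k-1) g^(k,p)(r)
    k = W1_sqrtS1.shape[1]
    z = rpgn(k, p, rng)                          # step 1a
    u = z / np.sum(np.abs(z) ** p) ** (1.0 / p)  # step 1b: direction on S_{k,p}
    return mu + W1_sqrtS1 @ (radius_sampler(rng) * u)  # steps 2 and 3

For the p-generalized Gaussian dg \(g_{PE}^{(k,p)}\), for instance, \(R^{p}/p \sim \text{Gamma}(k/p,1)\), so one may take radius_sampler = lambda rng: (p * rng.gamma(k / p)) ** (1 / p).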

Algorithm 2

1) Generation of the random radius and angle variables according to Lemma 2.4: Generate

a) R with \(f_{R}(r) = \omega _{k,p} \, r^{k-1} g^{(k,p)}(r) \, \mathbbm{1}_{[0,\infty)}(r)\), \(r\in \mathbb {R}\),

b) Ψi with \(f_{\Psi _{i}}(\psi _{i}) = \frac {\omega _{k-i,p}}{\omega _{k-i+1,p}} \frac { \left (\sin (\psi _{i})\right)^{k-i-1} }{ \left (N_{p}(\psi _{i})\right)^{k-i+1}} \, \mathbbm{1}_{[0,\pi)}(\psi _{i})\), \(\psi _{i}\in \mathbb {R}\), for i=1,…,k−2,

c) Ψk−1 with \(f_{\Psi _{k-1}}(\psi _{k-1}) = \frac {1}{\omega _{2,p}} \frac { 1 }{ \left (N_{p}(\psi _{k-1})\right)^{2}} \, \mathbbm{1}_{[0,2\pi)}(\psi _{k-1})\), \(\psi _{k-1}\in \mathbb {R}\).

2) Compute \(\tilde {X} = SPH_{p}^{(k)}(R,\Psi _{1},\ldots,\Psi _{k-1})\) and \(X = \mu + W_{1}\sqrt {S_{1}} \tilde {X}\).

For the particular case of simulating \(X \sim SMAN_{n,p}(\mu,D,G)\) where the mixture cdf G is explicitly known in a closed form, the following algorithm can be used. It is based on (8) and Lemma 4.1 where \(\tilde {X} \sim SMN_{k,p}(G)\) and notations from “The class of n-dimensional rank-k-continuous axis-aligned p-generalized elliptically contoured distributions” section are used.

Algorithm 3

1) Generate \(\tilde {Z} = \left (\tilde {Z}_{1},\ldots,\tilde {Z}_{k}\right)^{\text {\texttt {T}}}\) following the k-dimensional p-generalized spherical Gaussian distribution by generating k independent and identically distributed univariate p-generalized Gaussian random variables \(\tilde {Z}_{1},\ldots,\tilde {Z}_{k}\).

2) Generate independently a univariate random variable V having cdf G.

3) Compute \(\tilde {X} = V^{-\frac {1}{p}} \cdot \tilde {Z}\) and \(X = \mu + W_{1}\sqrt {S_{1}} \tilde {X}\).
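Analogously, a minimal Python sketch of Algorithm 3 follows, reusing rpgn from the sketch after Algorithm 1; algorithm3, student_V and slash_V are hypothetical names. The two mixing variables shown for the p-generalized Student-t and Slash examples are our reading of these cases — chosen such that p=2 recovers the classical \(\chi^{2}_{\nu}/\nu\) and \(U^{2/\nu}\) mixtures — and should be checked against the exact parametrizations used in the main text.

import numpy as np

def algorithm3(mu, W1_sqrtS1, p, sample_V, rng):
    # one draw of X ~ SMAN_{n,p}(mu, D, G); sample_V(rng) returns V with cdf G
    k = W1_sqrtS1.shape[1]
    z = rpgn(k, p, rng)  # step 1: k-dimensional p-generalized spherical Gaussian
    v = sample_V(rng)    # step 2: independent mixing variable V ~ G
    return mu + W1_sqrtS1 @ (v ** (-1.0 / p) * z)  # step 3

# assumed mixing laws (hypothetical parametrizations, see above)
p, nu = 3.0, 3.0
student_V = lambda rng: rng.gamma(shape=nu / p, scale=p / nu)  # p-generalized Student-t
slash_V = lambda rng: rng.uniform() ** (p / nu)                # p-generalized Slash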

7.2 Simulation of p-generalized Student as well as p-generalized Slash processes

According to the method described in “Simulation” section, but simulating a 201-dimensional apec Student-t or Slash distributed random vector with the help of one of the algorithms from Appendix 7.1 instead of 201 independent univariate p-generalized Gaussian variables, we get approximations of trajectories of p-generalized Student-t and p-generalized Slash processes having axis-aligned fdds. Particularly, approximate realizations of \(AStP_{p}\left(0_{[0,1]},1_{[0,1]},\nu\right)\) as well as of \(ASlP_{p}\left(0_{[0,1]},1_{[0,1]},\nu\right)\) for \(\nu\in\{1,3,10\}\) and \(p=\frac{1}{2}\) and p=3, respectively, are visualized in Figs. 4 and 5. Note that our considerations are restricted to the location function \(0_{[0,1]}\) and the scale function \(S=1_{[0,1]}\) while the effects of varying location and scale functions are already shown in Fig. 3. Furthermore, on the one hand, notice that the height of the amplitudes of the realizations of \(AStP_{p}\left(0_{[0,1]},1_{[0,1]},\nu\right)\) and \(ASlP_{p}\left(0_{[0,1]},1_{[0,1]},\nu\right)\), respectively, increases if p>0 decreases or ν>0 increases. On the other hand, it has to be kept in mind that the scales of the axes depend strongly on the specific realization of a trajectory of the process.
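As a hedged sketch of this construction (under the Gamma mixing assumption named after Algorithm 3, not the paper’s exact parametrization), an approximate trajectory of \(AStP_{p}\left(0_{[0,1]},1_{[0,1]},\nu\right)\) on a grid of 201 points can be drawn as follows: since the scale function equals 1, the 201 coordinates of \(\tilde{Z}\) are iid univariate p-generalized Gaussian variables, and a single mixing variable V scales the whole path.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)
p, nu, n = 3.0, 3.0, 201
t = np.linspace(0.0, 1.0, n)

# 201 iid univariate p-generalized Gaussian coordinates (m = 0, S = 1 on [0,1])
z = rng.choice([-1.0, 1.0], size=n) * (p * rng.gamma(1.0 / p, 1.0, size=n)) ** (1.0 / p)
v = rng.gamma(shape=nu / p, scale=p / nu)  # one common mixing variable for the path
x = v ** (-1.0 / p) * z

plt.step(t, x, where="mid")
plt.xlabel("t")
plt.ylabel("X_t")
plt.show()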

Fig. 4

Simulation of \(AStP_{p}\left(0_{[0,1]},1_{[0,1]},1\right)\) (left), \(AStP_{p}\left(0_{[0,1]},1_{[0,1]},3\right)\) (center), and \(AStP_{p}\left(0_{[0,1]},1_{[0,1]},10\right)\) (right)

Fig. 5

Simulation of \(ASlP_{p}\left(0_{[0,1]},1_{[0,1]},1\right)\) (left), \(ASlP_{p}\left(0_{[0,1]},1_{[0,1]},3\right)\) (center), and \(ASlP_{p}\left(0_{[0,1]},1_{[0,1]},10\right)\) (right)

Abbreviations

apec:

Axis-aligned p-generalized elliptically contoured

cdf:

Cumulative distribution function

dg:

Density generator

fdd:

Finite dimensional distribution

fdds:

Family of finite dimensional distributions

kapec:

Rank-k-continuous axis-aligned p-generalized elliptically contoured

pdf:

Probability density function

References


Acknowledgements

We are grateful to the Reviewers and the Associate Editor for their valuable hints.

Funding

Not applicable.

Availability of data and materials

Not applicable.

Authors’ information

Not applicable.

Author information


Contributions

Not applicable.

Corresponding author

Correspondence to Wolf-Dieter Richter.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Müller, K., Richter, WD. On p-generalized elliptical random processes. J Stat Distrib App 6, 1 (2019). https://doi.org/10.1186/s40488-019-0090-6

