1 Introduction

Gaussian processes (GPs) extend multivariate Gaussian distributions over finite dimensional vectors to infinite dimensionality. Specifically, a GP defines a distribution over functions, that is, each draw from a GP is a function. GPs therefore provide a principled, practical, and probabilistic approach to nonparametric regression and classification, and they have been applied successfully in many domains (Rasmussen and Williams 2006).

GPs have several desirable mathematical properties. The most appealing one is that, for regression with Gaussian noise, the prior distribution is conjugate for the likelihood function. Therefore the Bayesian update step is analytic, as is computing the predictive distribution for the function behavior at unknown locations. In spite of their success, GPs have several known shortcomings.

First, the Gaussian distribution is not a “heavy-tailed” distribution, and so it is not robust to extreme outliers. Alternatives to GPs have been proposed, the most notable being the class of elliptical processes (Fang 2018), such as Student-t processes (O’Hagan 1991; Zhang et al. 2007), in which any collection of function values has a desired elliptical distribution, with a covariance matrix built using a kernel.

Second, the Gaussian distribution is symmetric with respect to its mean. This implies, for instance, that its mean and median coincide, whereas the mean and median of an asymmetric (skewed) distribution can differ. This constraint limits GPs’ flexibility and affects the coverage of their credible intervals (regions), especially when considering that symmetry must hold for all components of the (latent) function and that, as discussed for instance by Kuss and Rasmussen (2005) and Nickisch and Rasmussen (2008), the exact posterior of a GP classifier is skewed.

To overcome this second limitation, in this paper we propose skew-Gaussian processes (SkewGPs) as a nonparametric prior over functions. A SkewGP extends the multivariate unified skew-normal distribution defined over finite dimensional vectors to a stochastic process, i.e., a distribution over infinite dimensional objects. A SkewGP is completely defined by a location function, a scale function and three additional parameters that depend on a latent dimension: a skewness function, a truncation vector and a covariance matrix. It is worth noting that a SkewGP reduces to a GP when the latent dimension is zero. Therefore, SkewGPs inherit all the good properties of GPs and increase their flexibility by allowing asymmetry in the probabilistic model.

We focus on applying this new nonparametric model to a classification problem. In the case of parametric models, Durante (2019) shows that the posterior distribution resulting from a probit likelihood and a Gaussian prior is a unified skew-normal distribution. This result allowed the author to efficiently compute full posterior inferences for Bayesian probit regression (for small datasets, \(n\approx 100\)). Moreover, the author showed that the unified skew-normal distribution is a conjugate prior for the probit likelihood (without, however, using this prior model for data analysis).

Here we extend this result to the nonparametric case and derive a semi-analytical expression for the posterior distribution of the latent function and for the predictive probabilities of SkewGPs. The term semi-analytical indicates that posterior inferences require the computation of the cumulative distribution function of a multivariate Gaussian distribution (i.e., of Gaussian orthant probabilities). By using lin-ess, a recent formulation (Gessner et al. 2019) of elliptical slice sampling (Murray et al. 2010) that permits efficient sampling from a Gaussian distribution restricted to a linearly constrained (e.g., orthant) domain, we show that posterior inferences for SkewGP binary classifiers can be computed efficiently. Lin-ess is a special case of elliptical slice sampling that exploits the analytic tractability of the intersections between ellipses and hyperplanes to speed up the elliptical slice algorithm. In particular, it guarantees rejection-free sampling and is therefore also highly parallelizable.

The main contributions of this paper are

  1. we propose a new class of stochastic processes, called skew-Gaussian processes (SkewGPs), which generalize GP models;

  2. we show that a SkewGP prior process is conjugate to the probit likelihood, thus deriving for the first time the posterior distribution of a GP classifier in analytic form;

  3. we derive an efficient way to learn the hyperparameters of a SkewGP and to compute Monte Carlo predictions using lin-ess, showing that our model has a bottleneck computational complexity similar to that of GPs;

  4. we evaluate the proposed SkewGP classifier against state-of-the-art implementations of the GP classifier that approximate the posterior with Laplace’s method or with Expectation Propagation;

  5. we show on a small image classification dataset that a SkewGP prior can lead to better uncertainty quantification than a GP prior.

2 Background

The skew normal distribution is a continuous probability distribution that generalises the normal distribution to allow for non-zero skewness. The probability density function (PDF) of the univariate skew-normal distribution with location \(\xi \in \mathbb {R}\), scale \(\sigma >0\) and skew parameter \(\alpha \in \mathbb {R}\) is given by O’Hagan and Leonard (1976):

$$\begin{aligned} p(z)={\frac{2}{\sigma }}\phi \left( {\frac{z-\xi }{\sigma }}\right) \varPhi \left( \alpha \left( {\frac{z-\xi }{\sigma }}\right) \right) , \quad z \in \mathbb {R} \end{aligned}$$

where \(\phi\) and \(\varPhi\) are the PDF and CDF, respectively, of the standard univariate normal distribution. This distribution has been generalised in several ways, see Azzalini (2013) for a review. In particular, Arellano and Azzalini (2006) provided a unification of these generalizations within a single, tractable multivariate unified skew-normal distribution that satisfies closure properties for marginals and conditionals and allows more flexibility due to the introduction of additional parameters.
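As a concrete illustration, the density above can be evaluated in a few lines of Python; the following is only a sketch (the parameter values are arbitrary and the function name is ours), using the standard normal PDF and CDF from SciPy.

```python
import numpy as np
from scipy.stats import norm

def skew_normal_pdf(z, xi=0.0, sigma=1.0, alpha=3.0):
    """Univariate skew-normal density with location xi, scale sigma, skewness alpha."""
    t = (z - xi) / sigma
    return (2.0 / sigma) * norm.pdf(t) * norm.cdf(alpha * t)

z = np.linspace(-4.0, 4.0, 9)
print(skew_normal_pdf(z))  # for alpha = 0 this reduces to the N(xi, sigma^2) density
```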

2.1 Unified skew-normal distribution

Fig. 1: Density plots for \(\text {SUN}_{1,s}(0,1,\varDelta ,\gamma ,\varGamma )\). For all plots \(\varGamma\) is a correlation matrix, \(\gamma = 0\), dashed lines are the contour plots of \(y \sim N_1(0,1)\)

Fig. 2: Contour density plots for four unified skew-normal distributions. For all plots \(p=2\), \(\varvec{\xi }=[0,0]^T\), \(\varOmega\) and \(\varGamma\) are correlation matrices with \(\varOmega _{1,2}= 0.88\), \(\gamma = 0\), dashed lines are the contour plots of \(y \sim N_2(\varvec{\xi },\varOmega )\)

A vector \(\varvec{z}\in \mathbb {R}^p\) is said to have a multivariate unified skew-normal distribution with latent skewness dimension s, \(\varvec{z}\sim \text {SUN}_{p,s}(\varvec{\xi },\varOmega ,\varDelta ,\varvec{\gamma },\varGamma )\), if its probability density function (Azzalini 2013, Ch.7) is:

$$\begin{aligned} p(\varvec{z}) = \phi _p(\varvec{z}-\varvec{\xi };\varOmega )\frac{\varPhi _s\left( \varvec{\gamma }+\varDelta ^T\bar{\varOmega }^{-1}D_\varOmega ^{-1}(\varvec{z}-\varvec{\xi });\varGamma -\varDelta ^T\bar{\varOmega }^{-1}\varDelta \right) }{\varPhi _s\left( \varvec{\gamma };\varGamma \right) } \end{aligned}$$
(1)

where \(\phi _p(\varvec{z}-\varvec{\xi };\varOmega )\) represents the PDF of a multivariate normal distribution with mean \(\varvec{\xi }\in \mathbb {R}^p\) and covariance \(\varOmega =D_\varOmega \bar{\varOmega } D_\varOmega \in \mathbb {R}^{p\times p}\), with \(\bar{\varOmega }\) being a correlation matrix and \(D_\varOmega\) a diagonal matrix containing the square roots of the diagonal elements of \(\varOmega\). The notation \(\varPhi _s(\varvec{a};M)\) denotes the CDF of \(N_s(0,M)\) evaluated at \(\varvec{a}\in \mathbb {R}^s\). The parameters \(\varvec{\gamma }\in \mathbb {R}^s, \varGamma \in \mathbb {R}^{s\times s},\varDelta \in \mathbb {R}^{p \times s}\) of the SUN distribution are related to a latent variable that controls the skewness; in particular, \(\varDelta\) is called the skewness matrix. The PDF (1) is well-defined provided that the matrix

$$\begin{aligned} M:=\begin{bmatrix} \varGamma &{} \varDelta ^T\\ \varDelta &{} \bar{\varOmega } \end{bmatrix} \in \mathbb {R}^{(s+p)\times (s+p)}>0, \end{aligned}$$
(2)

i.e., M is positive definite. Note that when \(\varDelta =0\), (1) reduces to \(\phi _p(\varvec{z}-\varvec{\xi };\varOmega )\). Moreover we assume that \(\varPhi _0(\cdot )=1\), so that, for \(s=0\), (1) becomes a multivariate normal distribution.

The rôle of the latent dimension s can be briefly explained as follows. Consider a random vector \(\begin{bmatrix} \varvec{x}_0 \\ \varvec{x}_1 \end{bmatrix} \sim N_{s+p}(0,M)\) with M as in (2) and define \(\mathbf {y}\) as the vector with distribution \((\varvec{x}_1 \mid \varvec{x}_0+\varvec{\gamma }>0)\); then it can be shown (Azzalini 2013, Ch. 7) that \(\varvec{z}= \varvec{\xi }+ D_\varOmega \mathbf {y}\sim \text {SUN}_{p,s}(\varvec{\xi },\varOmega ,\varDelta ,\varvec{\gamma },\varGamma )\). This representation will be used in Sect. 5 to draw samples from the distribution. Figure 1 shows the density of a univariate SUN distribution with latent dimensions \(s=1\) (a1) and \(s=2\) (a2). The effect of a higher latent dimension can be better observed in the bivariate SUN densities shown in Fig. 2. The contours of the corresponding bivariate normal are dashed. We also plot the skewness directions given by \(\bar{\varOmega }^{-1}\varDelta\).
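The conditional representation above can be turned directly into a (naive) sampler; the sketch below uses plain rejection only to illustrate the construction (Sect. 5 replaces it with lin-ess), and all names are ours.

```python
import numpy as np

def sample_sun(xi, Omega, Delta, gamma, Gamma, n_samples, rng=None):
    """Draw samples of z ~ SUN_{p,s}(xi, Omega, Delta, gamma, Gamma) via
    z = xi + D_Omega * (x1 | x0 + gamma > 0), with [x0, x1] ~ N_{s+p}(0, M)."""
    rng = np.random.default_rng(0) if rng is None else rng
    p, s = Omega.shape[0], Gamma.shape[0]
    d = np.sqrt(np.diag(Omega))                       # diagonal of D_Omega
    Omega_bar = Omega / np.outer(d, d)                # correlation matrix
    M = np.block([[Gamma, Delta.T], [Delta, Omega_bar]])
    samples = []
    while len(samples) < n_samples:                   # naive rejection, illustration only
        x = rng.multivariate_normal(np.zeros(s + p), M)
        x0, x1 = x[:s], x[s:]
        if np.all(x0 + gamma > 0):
            samples.append(xi + d * x1)
    return np.array(samples)
```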

The skew-normal family has several interesting properties, see Azzalini (2013, Ch.7) for details. Most notably, it is closed under marginalization and affine transformations. Specifically, if we partition \(z = [z_1 , z_2]^T\), where \(z_1 \in \mathbb {R}^{p_1}\) and \(z_2 \in \mathbb {R}^{p_2}\) with \(p_1+p_2=p\), then

$$\begin{aligned} \begin{array}{c} z_1 \sim SUN_{p_1,s}(\varvec{\xi }_1,\varOmega _{11},\varDelta _1,\varvec{\gamma },\varGamma ), \\ \text {with }\,\, \varvec{\xi }=\begin{bmatrix} \varvec{\xi }_1\\ \varvec{\xi }_2 \end{bmatrix},\,\,\, \varDelta =\begin{bmatrix} \varDelta _1\\ \varDelta _2 \end{bmatrix},\,\,\, \varOmega =\begin{bmatrix} \varOmega _{11} &{} \varOmega _{12}\\ \varOmega _{21} &{} \varOmega _{22} \end{bmatrix}. \end{array} \end{aligned}$$
(3)

Moreover, (Azzalini 2013, Ch.7) the conditional distribution is a unified skew-normal, i.e., \((Z_2|Z_1=z_1) \sim SUN_{p_2,s}(\varvec{\xi }_{2|1},\varOmega _{2|1},\varDelta _{2|1},\varvec{\gamma }_{2|1},\varGamma _{2|1})\), where

$$\begin{aligned} \varvec{\xi }_{2|1}&:=\varvec{\xi }_{2}+\varOmega _{21}\varOmega _{11}^{-1}(z_1-\varvec{\xi }_1), \quad \varOmega _{2|1} := \varOmega _{22}-\varOmega _{21}\varOmega _{11}^{-1}\varOmega _{12},\\ \varDelta _{2|1}&:=\varDelta _2 -\bar{\varOmega }_{21}\bar{\varOmega }_{11}^{-1}\varDelta _1,\\ \varvec{\gamma }_{2|1}&:=\varvec{\gamma }+\varDelta _1^T \varOmega _{11}^{-1}(z_1-\varvec{\xi }_1), \quad \varGamma _{2|1}:=\varGamma -\varDelta _1^T\bar{\varOmega }_{11}^{-1}\varDelta _1, \end{aligned}$$

and \(\bar{\varOmega }_{11}^{-1}:=(\bar{\varOmega }_{11})^{-1}\).
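For later reference, the closure formulas above translate into a small helper that returns the parameters of \((Z_2 \mid Z_1 = z_1)\); this is only a sketch that follows the expressions as stated (the function name and the blockwise indexing are ours).

```python
import numpy as np

def sun_conditional(z1, xi, Omega, Delta, gamma, Gamma, p1):
    """Parameters of (Z2 | Z1 = z1) for z ~ SUN_{p,s}(xi, Omega, Delta, gamma, Gamma),
    where the first p1 components of z are observed (closure formulas quoted above)."""
    d = np.sqrt(np.diag(Omega))
    Omega_bar = Omega / np.outer(d, d)
    i1, i2 = np.arange(p1), np.arange(p1, Omega.shape[0])
    O11, O12 = Omega[np.ix_(i1, i1)], Omega[np.ix_(i1, i2)]
    O21, O22 = Omega[np.ix_(i2, i1)], Omega[np.ix_(i2, i2)]
    Ob11, Ob21 = Omega_bar[np.ix_(i1, i1)], Omega_bar[np.ix_(i2, i1)]
    D1, D2 = Delta[:p1, :], Delta[p1:, :]
    xi_c = xi[i2] + O21 @ np.linalg.solve(O11, z1 - xi[i1])
    Omega_c = O22 - O21 @ np.linalg.solve(O11, O12)
    Delta_c = D2 - Ob21 @ np.linalg.solve(Ob11, D1)
    gamma_c = gamma + D1.T @ np.linalg.solve(O11, z1 - xi[i1])
    Gamma_c = Gamma - D1.T @ np.linalg.solve(Ob11, D1)
    return xi_c, Omega_c, Delta_c, gamma_c, Gamma_c
```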

3 Skew-Gaussian process

In this section, we define a Skew-Gaussian Process (SkewGP). Consider a location function \(\xi :\mathbb {R}^p \rightarrow \mathbb {R}\), a scale function \(\varOmega : \mathbb {R}^p \times \mathbb {R}^p \rightarrow \mathbb {R}\), a skewness vector function \(\varDelta :\mathbb {R}^p \rightarrow \mathbb {R}^s\), and parameters \(\varvec{\gamma }\in \mathbb {R}^s,\varGamma \in \mathbb {R}^{s \times s}\).

We say that a real function \(f: \mathbb {R}^p \rightarrow \mathbb {R}\) is distributed as a Skew-Gaussian process with latent dimension s, if, for any sequence of n points \(\varvec{x}_1,\dots ,\varvec{x}_n \in \mathbb {R}^p\), the vector \(f(X)=[f(\varvec{x}_1),\dots ,f(\varvec{x}_n)]^T\) is skew-Gaussian distributed with parameters \(\varvec{\gamma },\varGamma\), location, scale and skewness matrices, respectively, given by

$$\begin{aligned} \begin{array}{rl} \xi (X):=\begin{bmatrix} \xi (\varvec{x}_1)\\ \xi (\varvec{x}_2)\\ \vdots \\ \xi (\varvec{x}_n)\\ \end{bmatrix},\,\, \varOmega (X,X)&{}:= \begin{bmatrix} \varOmega (\varvec{x}_1,\varvec{x}_1) &{} \varOmega (\varvec{x}_1,\varvec{x}_2) &{}\dots &{} \varOmega (\varvec{x}_1,\varvec{x}_n)\\ \varOmega (\varvec{x}_2,\varvec{x}_1) &{} \varOmega (\varvec{x}_2,\varvec{x}_2) &{}\dots &{} \varOmega (\varvec{x}_2,\varvec{x}_n)\\ \vdots &{} \vdots &{}\dots &{} \vdots \\ \varOmega (\varvec{x}_n,\varvec{x}_1) &{} \varOmega (\varvec{x}_n,\varvec{x}_2) &{}\dots &{} \varOmega (\varvec{x}_n,\varvec{x}_n)\\ \end{bmatrix},\\ \varDelta (X)&{}:=\begin{bmatrix} \,\,\varDelta (\varvec{x}_1) &{} \quad \quad \varDelta (\varvec{x}_2) &{}\,\,\dots &{} \,\quad \varDelta (\varvec{x}_n)\\ \end{bmatrix}. \end{array} \end{aligned}$$
(4)

The skew-Gaussian distribution above is well defined provided that the matrix

$$\begin{aligned} \begin{bmatrix} \varGamma &{} \varDelta (X) \\ \varDelta (X)^T &{} \varOmega (X,X) \end{bmatrix} \end{aligned}$$

is positive definite for every finite set of points \(X=\{\varvec{x}_1,\dots ,\varvec{x}_n\} \subset \mathbb {R}^p\) and for every \(n \ge 1\). In this case we can write

$$\begin{aligned} f(\varvec{x}) \sim \text {SkewGP}_{s}(\xi (\varvec{x}),\varOmega (\varvec{x},\varvec{x}'),\varDelta (\varvec{x},\varvec{x}'),\varvec{\gamma },\varGamma ). \end{aligned}$$
(5)

We detail how to select the parameters in Sect. 4; the proposition below shows that a SkewGP is a proper stochastic process.

Proposition 1

The construction of a Skew-Gaussian process from a finite-dimensional distribution is well-posed.

All the proofs are in “Appendices 1 and 2”.

3.1 Binary classification

Consider the training data \(\mathcal {D}=\{(\varvec{x}_i,y_i)\}_{i=1}^n\), where \(\varvec{x}_i \in \mathbb {R}^p\) and \(y_i \in \{0,1\}\). We aim to build a nonparametric binary classifier. We first define a probabilistic model \(\mathcal {M}\) by assuming that \(f \sim \text {SkewGP}(\xi , \varOmega , \varDelta , \varvec{\gamma }, \varGamma )\) and considering a probit model for the likelihood:

$$\begin{aligned} \begin{aligned} p(\mathcal {D}|f)&=\prod \limits _{i=1}^n \varPhi (f(\varvec{x}_i);1)^{y_i}(1-\varPhi (f(\varvec{x}_i);1))^{1-y_i}=\prod \limits _{i=1}^n \varPhi ((2y_i-1)f(\varvec{x}_i);1)\\&=\varPhi _n(Wf(X);I_n), \end{aligned} \end{aligned}$$
(6)

where \(W=\text {diag}(2y_1-1,\dots ,2y_n-1)\). A SkewGP prior combined with a probit likelihood gives rise to a SkewGP posterior over functions, because skew-Gaussian distributions are conjugate priors for probit models. In the finite dimensional parametric case, this property was shown by Durante (2019); here we extend it to the nonparametric case.

Theorem 1

The posterior of f(X) is a skew-Gaussian distribution, i.e.

$$\begin{aligned} p(f(X)|\mathcal {D})&= \text {SUN}_{n,s+n}(\tilde{\xi },\tilde{\varOmega },\tilde{\varDelta },\tilde{\varvec{\gamma }},\tilde{\varGamma }) \end{aligned}$$
(7)
$$\begin{aligned} \tilde{\xi }&=\xi ,\quad \tilde{\varOmega } = \varOmega , \end{aligned}$$
(8)
$$\begin{aligned} \tilde{\varDelta }&=[\varDelta ,\,\,\bar{\varOmega }D_\varOmega W^T], \end{aligned}$$
(9)
$$\begin{aligned} \tilde{\varvec{\gamma }}&=[\varvec{\gamma },\,\,W\xi ]^T, \end{aligned}$$
(10)
$$\begin{aligned} \tilde{\varGamma }&=\begin{bmatrix} \varGamma &{} \,\,\varDelta ^T D_\varOmega W^T \\ WD_\varOmega \varDelta &{} \,\,(W\varOmega W^T + I_n) \end{bmatrix}, \end{aligned}$$
(11)

where, for simplicity of notation, we have denoted \(\xi (X),\varOmega (X,X),\varDelta (X)\) as \(\xi ,\varOmega ,\varDelta\) and \(\varOmega = D_\varOmega \bar{\varOmega } D_\varOmega\).
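Since the posterior parameters above are plain matrix operations, they can be assembled with a short helper. The following is a sketch of the formulas as stated in Theorem 1 (with \(\varDelta\) stored as an \(n \times s\) matrix, as in the SUN definition), not the authors' code.

```python
import numpy as np

def skewgp_probit_posterior(xi, Omega, Delta, gamma, Gamma, y):
    """Posterior SUN parameters of Theorem 1 for the probit likelihood.

    xi: (n,) prior location, Omega: (n, n) prior scale, Delta: (n, s) skewness
    matrix, gamma: (s,), Gamma: (s, s), y: (n,) binary labels in {0, 1}.
    """
    n = len(y)
    W = np.diag(2.0 * np.asarray(y) - 1.0)              # W = diag(2 y_i - 1)
    d = np.sqrt(np.diag(Omega))
    D = np.diag(d)
    Omega_bar = Omega / np.outer(d, d)
    xi_t, Omega_t = xi, Omega                            # eq. (8)
    Delta_t = np.hstack([Delta, Omega_bar @ D @ W.T])    # eq. (9)
    gamma_t = np.concatenate([gamma, W @ xi])            # eq. (10)
    Gamma_t = np.block([                                 # eq. (11)
        [Gamma,          Delta.T @ D @ W.T],
        [W @ D @ Delta,  W @ Omega @ W.T + np.eye(n)],
    ])
    return xi_t, Omega_t, Delta_t, gamma_t, Gamma_t
```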

From Theorem 1 we can immediately derive the following result.

Corollary 1

The marginal likelihood of the observations \(\mathcal {D}=\{(\varvec{x}_i,y_i)\}_{i=1}^n\) given the probabilistic model \(\mathcal {M}\), that is the prior (5) and likelihood (6), is

$$\begin{aligned} p(\mathcal {D}|\mathcal {M}) = \frac{ \varPhi _{s+n}(\tilde{\varvec{\gamma }};\,\tilde{\varGamma })}{\varPhi _{s}(\varvec{\gamma };\,\varGamma )}, \end{aligned}$$
(12)

with \(\tilde{\varvec{\gamma }},\tilde{\varGamma }\) defined in Theorem 1.
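For small n, Eq. (12) can be evaluated directly with a multivariate Gaussian CDF routine (SciPy wraps Genz-type algorithms). The sketch below assumes the posterior parameters have already been computed, e.g. with the helper shown after Theorem 1; the function name is ours.

```python
import numpy as np
from scipy.stats import multivariate_normal

def log_marginal_likelihood(gamma, Gamma, gamma_t, Gamma_t):
    """Eq. (12): ratio of two Gaussian orthant probabilities. Feasible for small n
    only, since the numerator is an (s+n)-dimensional CDF."""
    num = multivariate_normal(mean=np.zeros(len(gamma_t)), cov=Gamma_t).cdf(gamma_t)
    den = multivariate_normal(mean=np.zeros(len(gamma)), cov=Gamma).cdf(gamma)
    return np.log(num) - np.log(den)
```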

In classification, based on the training data \(\mathcal {D}=\{(\varvec{x}_i,y_i)\}_{i=1}^n\), and given test inputs \(\varvec{x}^*\), we aim to predict the probability that \(y^*=1\).

Corollary 2

The posterior predictive probability of \(y^*=1\) given the test input \(\varvec{x}^* \in \mathbb {R}^p\) and the training data \(\mathcal {D}=\{(\varvec{x}_i,y_i)\}_{i=1}^n\) is

$$\begin{aligned} p(y^*=1|\varvec{x}^*,X,y) = \frac{\varPhi _{s+n+1}(\tilde{\varvec{\gamma }}^*;\,\tilde{\varGamma }^*)}{\varPhi _{s+n}(\tilde{\varvec{\gamma }};\,\tilde{\varGamma })}, \end{aligned}$$
(13)

where \(\tilde{\varvec{\gamma }}^*,\tilde{\varGamma }^*\) are the corresponding matrices of the posterior computed for the augmented dataset \(\hat{X}=[X^T,{\varvec{x}^*}^T]^T\), \(\hat{y}=[y^T,1/2]^T\).

Note that the dummy value \(\frac{1}{2}\) in \(\hat{y}\) does not influence the value of \(p(y^*=1 | \varvec{x}^*, X,y)\); it was chosen only for mathematical convenience, as it allows marginalization over \(f(\varvec{x}^*)\) and the derivation of \(\tilde{\varvec{\gamma }}^*,\tilde{\varGamma }^*\) along the lines of Theorem 1.

4 Prior functions, parameters and hyperparameters

A SkewGP prior is completely defined by the location function \(\xi (\varvec{x})\), the scale function \(\varOmega (\varvec{x},\varvec{x}')\), the latent dimension \(s\in \mathbb {N}\), the skewness vector function \(\varDelta (\varvec{x},\varvec{x}') \in \mathbb {R}^s\) and \(\varvec{\gamma }\in \mathbb {R}^s,\varGamma \in \mathbb {R}^{s \times s}\). As is common for GPs, we take the location function \(\xi (\varvec{x})\) to be zero, although this is not required. Let \(K(\varvec{x},\varvec{x}')\) be a positive definite covariance function (kernel) and let \(\varOmega = K(X,X)\) be the covariance matrix obtained by applying \(K(\varvec{x},\varvec{x}')\) elementwise to the training data X. In this paper, we propose the following way to define the location, scale and skewness functions of a SkewGP:

$$\begin{aligned} M= \begin{bmatrix} \varGamma &{} \varDelta (X,R)^T\\ \varDelta (X,R) &{} \bar{\varOmega }(X,X) \end{bmatrix} := \begin{bmatrix} L\bar{K}(R,R)L &{} \,\,L\bar{K}(R,X)\\ \bar{K}(X,R)L &{} \,\,\bar{K}(X,X) \end{bmatrix}, \end{aligned}$$
(14)

where \(L \in \mathbb {R}^{s \times s}\) is a diagonal matrix whose elements \(L_{ii} \in \{-1,1\}\) (a phase), \(R=[\varvec{r}_1,\dots ,\varvec{r}_s]^T \in \mathbb {R}^{s \times p}\) is a matrix of s pseudo-points and \(\bar{K}(\varvec{x},\varvec{x}')=\frac{1}{\sigma ^2}K(\varvec{x},\varvec{x}')\) (for stationary kernels), with \(K(\varvec{x},\varvec{x}')\) being the kernel function and \(\sigma ^2\) the variance parameter of the kernel; e.g., for the RBF kernel

$$\begin{aligned} K(\varvec{x},\varvec{x}') := \sigma ^2 \exp \left( -\frac{\Vert {\varvec{x}} -{\varvec{x}'} \Vert ^{2}}{2\ell ^{2}}\right) . \end{aligned}$$

It can easily be proven that \(M>0\) and, therefore, (2) holds. We select the parameters of the kernel \(\sigma ,\ell\), the locations \(\varvec{r}_i\) of the pseudo-points and the phase diagonal matrix L by maximizing the marginal likelihood. In particular we exploit the lower bound (16) explained in Sect. 5.
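In code, the construction in (14) amounts to evaluating the kernel on the training inputs and the pseudo-points and applying the phase matrix. The following sketch (RBF kernel, all names ours) builds the blocks \(\varGamma\), \(\varDelta\) and \(\bar{\varOmega}\).

```python
import numpy as np

def rbf_kernel(A, B, sigma2=1.0, ell=1.0):
    """K(x, x') = sigma^2 exp(-||x - x'||^2 / (2 ell^2)), A: (n, p), B: (m, p)."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return sigma2 * np.exp(-0.5 * sq / ell**2)

def sun_prior_blocks(X, R, L_sign, sigma2=1.0, ell=1.0):
    """Blocks of M in eq. (14): Gamma = L Kbar(R,R) L, Delta = Kbar(X,R) L,
    Omega_bar = Kbar(X,X), with Kbar = K / sigma^2."""
    L = np.diag(L_sign)                               # diagonal of +/-1 (the "phase")
    Kbar = lambda A, B: rbf_kernel(A, B, sigma2, ell) / sigma2
    Gamma = L @ Kbar(R, R) @ L
    Delta = Kbar(X, R) @ L                            # n x s skewness matrix
    Omega_bar = Kbar(X, X)
    Omega = sigma2 * Omega_bar                        # scale matrix of the SkewGP
    return Gamma, Delta, Omega_bar, Omega
```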

Similarly to the inducing points in the sparse approximation of GPs (Quiñonero-Candela and Rasmussen 2005; Snelson and Ghahramani 2006; Titsias 2009; Bauer et al. 2016), the points \(\varvec{r}_i\) can be viewed as a set of s latent variables. However, their rôle is completely different from that of the inducing points: they allow us to locally modulate the skewness of the SkewGP. Figure 3 shows latent functions (in gray, second column) drawn from a SkewGP with latent dimension 2 and the result of squashing these sample functions through the probit function (first column). In all cases, we have considered \(\xi (\varvec{x})=0\) and an RBF kernel with \(\ell =0.3\) and \(\sigma ^2=1\). The values of the other parameters of the SkewGP are reported at the top of the plots in the first column. The green line is the mean function and the red dots represent the locations of the \(s=2\) latent pseudo-points. For large positive values of \(\gamma _1, \gamma _2\), the SkewGP is equivalent to a GP (plots (a1)–(a2)). As \(\gamma _i\), \(i=1,2\), decreases (plots (b1)–(b2)), the mean shifts up and the mass of the distribution is concentrated at the top of the figure. By changing the phase (the sign) of \(L_{22}\) (plots (c1)–(c2)), the mean and the mass of the distribution shift down at the location of the second pseudo-point \(r_2\). We can magnify this effect by decreasing both \(\gamma _i\) (plots (d1)–(d2)). It is also possible to introduce skewness without changing the mean (plots (e1)–(e2)). In this latter case, \(r_1=r_2\) and the mass of the distribution is shifted up.

Fig. 3: The second column shows random latent functions (in gray) drawn from a SkewGP with latent dimension 2 and the first column shows the result of squashing these sample functions through the probit function. The values of the parameters of the SkewGP are reported at the top of the plots in the first column. The green line is the mean function and the red dots represent the locations of the \(s=2\) latent pseudo-points

5 Computational complexity

Corollaries 1 and 2 provide two straightforward ways to compute the marginal likelihood and the predictive posterior probability; however, Eqs. (12) and (13) both require the accurate computation of \(\varPhi _{s+n}\). Quasi-randomized Monte Carlo methods (Genz 1992; Genz and Bretz 2009; Botev 2017) allow the calculation of \(\varPhi _{s+n}\) only for small n (a few hundred observations). Therefore, these procedures are not in general suitable for medium and large n, apart from special cases (Phinikettos and Gandy 2011; Genton et al. 2018; Azzimonti and Ginsbourger 2018). We overcome this issue with an effective use of sampling for the predictive posterior and a mini-batch approach to the marginal likelihood.

5.1 Posterior predictive distribution

In order to compute the predictive distribution we generate samples from the posterior distribution at training points and then exploit the closure properties of the SUN distribution to obtain samples at test points. The following result from Azzalini (2013) allows us to draw independent samples from the posterior in (7):

$$\begin{aligned} f(X)&\sim \tilde{\xi }+ D_\varOmega \left( U_0+\tilde{\varDelta }\tilde{\varGamma }^{-1}U_1\right) ,\nonumber \\ U_0&\sim \mathcal {N}(0;\bar{\varOmega }-\tilde{\varDelta }\tilde{\varGamma }^{-1}\tilde{\varDelta }^T),\quad U_1 \sim \mathcal {T}_{\tilde{\varvec{\gamma }}}(0;\tilde{\varGamma }), \end{aligned}$$
(15)

where \(\mathcal {T}_{\tilde{\varvec{\gamma }}}(0;\tilde{\varGamma })\) is the PDF of a multivariate Gaussian distribution truncated component-wise below \(-\tilde{\varvec{\gamma }}\). Equation (15) is a consequence of the additive representation of skew-normal distributions, see Azzalini (2013, Ch. 7.1.2 and 5.1.3) for more details. Note that sampling \(U_0\) can be achieved with standard methods; however, using standard rejection sampling for the variable \(U_1\) would incur exponentially growing sampling time as the dimension increases. A commonly used sampling technique for this type of problem is elliptical slice sampling (ess) (Murray et al. 2010), a Markov chain Monte Carlo algorithm that performs inference in models with Gaussian priors. This method looks for acceptable samples along elliptical slices and, by doing so, drastically reduces the number of rejected samples. Recently, Gessner et al. (2019) proposed an extension of ess, called linear elliptical slice sampling (lin-ess), for multivariate Gaussian distributions truncated to a region defined by linear constraints. This approach analytically derives the acceptable regions on the elliptical slices used in ess and thus guarantees rejection-free sampling. This leads to a large speed-up over ess, especially in high dimensions.
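To give an idea of the mechanics, the following is a minimal sketch of one lin-ess step for the component-wise truncation needed for \(U_1\) in (15), i.e. sampling from \(N(0,\tilde{\varGamma })\) restricted to \(\{u : u > -\tilde{\varvec{\gamma }}\}\). It follows the geometric idea of Gessner et al. (2019), namely the analytic intersection of the ellipse with the linear constraints, but it is not their reference implementation; all names are ours.

```python
import numpy as np

def liness_step(u, chol_S, gamma, rng):
    """One rejection-free elliptical slice step targeting N(0, S) truncated to {u > -gamma}.
    u is the current (feasible) state, chol_S the lower Cholesky factor of S."""
    n = u.size
    nu = chol_S @ rng.standard_normal(n)              # auxiliary draw, nu ~ N(0, S)
    # Constraint i on the ellipse u(t) = u cos t + nu sin t:
    #   u_i cos t + nu_i sin t + gamma_i > 0, i.e. r_i cos(t - phi_i) > -gamma_i.
    r, phi = np.hypot(u, nu), np.arctan2(nu, u)
    angles = [0.0, 2.0 * np.pi]
    for i in range(n):
        if r[i] > abs(gamma[i]):                      # otherwise constraint i never switches
            a = np.arccos(np.clip(-gamma[i] / r[i], -1.0, 1.0))
            angles += [(phi[i] + a) % (2 * np.pi), (phi[i] - a) % (2 * np.pi)]
    angles = np.sort(angles)
    # Keep the arcs whose midpoint satisfies all constraints, then sample an angle
    # uniformly from the union of these feasible arcs (no rejection is needed).
    feasible, lengths = [], []
    for lo, hi in zip(angles[:-1], angles[1:]):
        t_mid = 0.5 * (lo + hi)
        if np.all(u * np.cos(t_mid) + nu * np.sin(t_mid) + gamma > 0):
            feasible.append((lo, hi))
            lengths.append(hi - lo)
    lengths = np.asarray(lengths)
    lo, hi = feasible[rng.choice(len(feasible), p=lengths / lengths.sum())]
    t = rng.uniform(lo, hi)
    return u * np.cos(t) + nu * np.sin(t)
```

Starting from any feasible point (e.g. the component-wise maximum of \(-\tilde{\varvec{\gamma }}\) and 0 plus a small offset), iterating this step yields the chain for \(U_1\); every proposed angle is accepted, which is what makes the sampler easy to parallelize.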

Given posterior samples at the training points, it is possible to compute the predictive posterior at a new test point \(\varvec{x}^*\) thanks to the following result.

Theorem 2

The posterior samples of the latent function computed at the test point \(\varvec{x}^*\) can be obtained by sampling \(f(\varvec{x}^*)\) from:

$$\begin{aligned} f(\varvec{x}^*)&\sim SUN_{1,s}({\xi }_{*},{\varOmega }_{*},{\varDelta }_{*},{\varvec{\gamma }}_{*},{\varGamma }_{*}),\\ {\xi }_{*}&=\xi (\varvec{x}^*)+K(\varvec{x}^*,X)K(X,X)^{-1}(f(X)-\xi (X)),\\ {\varOmega }_{*}&= K(\varvec{x}^*,\varvec{x}^*)-K(\varvec{x}^*,X)K(X,X)^{-1}K(X,\varvec{x}^*),\\ {\varDelta _{*}}&=\varDelta (\varvec{x}^*) -\bar{K}(\varvec{x}^*,X)\bar{K}(X,X)^{-1}\varDelta (X),\\ {\varvec{\gamma }_{*}}&=\varvec{\gamma }+\varDelta (X)^T \bar{K}(X,X)^{-1}D_\varOmega (X,X)^{-1}(f(X)-\xi (X)), \\ {\varGamma _{*}}&=\varGamma -\varDelta (X)^T\bar{K}^{-1}\varDelta (X), \end{aligned}$$

with f(X) sampled from the posterior \(\text {SUN}_{n,s+n}(\tilde{\xi },\tilde{\varOmega },\tilde{\varDelta },\tilde{\varvec{\gamma }},\tilde{\varGamma })\) in Theorem 1, K is a kernel that defines the matrices \(\varGamma ,\varDelta , \varOmega\) as in eq. (14) and where

$$\begin{aligned} \begin{bmatrix} \bar{K}(X,X) &{} \bar{K}(X,\varvec{x}^*)\\ \bar{K}(\varvec{x}^*,X) &{} \bar{K}(\varvec{x}^*,\varvec{x}^*) \end{bmatrix} :=D_\varOmega ^{-1}\begin{bmatrix} K(X,X) &{} K(X,\varvec{x}^*)\\ K(\varvec{x}^*,X) &{} K(\varvec{x}^*,\varvec{x}^*) \end{bmatrix}D_\varOmega ^{-1} \end{aligned}$$

and \(D_\varOmega =\text {diag}[D_\varOmega (X),D_\varOmega (\varvec{x}^*)]\) is a diagonal matrix containing the square roots of the diagonal elements of the matrix on the right-hand side.

Observe that the computation of the predictive posterior requires the inversion of an \(n \times n\) matrix (\(\bar{\varOmega }_{11}\)), which has complexity \(O(n^3)\) with storage demands of \(O(n^2)\). SkewGPs therefore have a computational bottleneck similar to that of GPs. Moreover, note that sampling from \(SUN_{1,s}\) is extremely efficient when the latent dimension s is small (in the experiments we use \(s=2\)).

5.2 Marginal likelihood

As discussed in the previous section, in practical applications of SkewGPs the (hyper-)parameters of the scale function \(\varOmega (\varvec{x},\varvec{x}')\), of the skewness vector function \(\varDelta (\varvec{x},\varvec{x}') \in \mathbb {R}^s\) and the parameters \(\varvec{\gamma }\in \mathbb {R}^s,\varGamma \in \mathbb {R}^{s \times s}\) have to be selected. As for GPs, we use Bayesian model selection to consistently set such parameters. This requires the maximization of the marginal likelihood with respect to these parameters and, therefore, it is essential to provide a fast and accurate way to evaluate the marginal likelihood. In this paper, we propose a simple approximation of the marginal likelihood that allows us to efficiently compute a lower bound.

Proposition 2

Consider the marginal likelihood \(p(\mathcal {D}|\mathcal {M})\) in Corollary 1, then it holds

$$\begin{aligned} p(\mathcal {D}|\mathcal {M}) = \frac{\varPhi _{s+n}(\tilde{\varvec{\gamma }};\,\tilde{\varGamma })}{\varPhi _{s}(\varvec{\gamma };\,\varGamma )}\ge \frac{ \sum _{i=1}^{b} \varPhi _{s+|B_i|}(\tilde{\varvec{\gamma }}_{B_i};\,\tilde{\varGamma }_{B_i})-(b-1)}{\varPhi _{s}(\varvec{\gamma };\,\varGamma )}, \end{aligned}$$
(16)

where \(B_1,\dots ,B_b\) denotes a partition of the training dataset into b random disjoint subsets, \(|B_i|\) denotes the number of observations in the ith element of the partition, \(\tilde{\varvec{\gamma }}_{B_i},\,\tilde{\varGamma }_{B_i}\) are the parameters of the posterior computed using only the subset \(B_i\) of the data.

If the batch size is small (in the experiments we have used \(|B_i|=30\)), then we can efficiently compute each term \(\varPhi _{s+|B_i|}(\tilde{\varvec{\gamma }}_{B_i};\,\tilde{\varGamma }_{B_i})\) by using a quasi-randomised Monte Carlo method. We can then optimize the hyperparameters of the SkewGP by maximizing the lower bound in (16).
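A possible implementation of the bound in (16), reusing the skewgp_probit_posterior helper sketched in Sect. 3.1, is given below; the batch partition is drawn at random and the multivariate CDFs are evaluated with SciPy (all names are ours).

```python
import numpy as np
from scipy.stats import multivariate_normal

def marginal_likelihood_lower_bound(xi, Omega, Delta, gamma, Gamma, y,
                                    batch_size=30, rng=None):
    """Mini-batch lower bound (16) on p(D | M): the (s+n)-dimensional CDF is replaced
    by a sum of (s+|B_i|)-dimensional CDFs over a random partition of the data."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(y)
    idx = rng.permutation(n)
    batches = [idx[i:i + batch_size] for i in range(0, n, batch_size)]
    total = 0.0
    for B in batches:
        # posterior parameters of Theorem 1 restricted to the batch B
        _, _, _, g_B, G_B = skewgp_probit_posterior(
            xi[B], Omega[np.ix_(B, B)], Delta[B, :], gamma, Gamma, np.asarray(y)[B])
        total += multivariate_normal(mean=np.zeros(len(g_B)), cov=G_B).cdf(g_B)
    denom = multivariate_normal(mean=np.zeros(len(gamma)), cov=Gamma).cdf(gamma)
    return (total - (len(batches) - 1)) / denom
```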

5.3 Computational load and parallelization

To evaluate the computational load, we have generated artificial classification data using a probit likelihood model and drawing the latent function \(f(X)=[f(\varvec{x}_1),\dots ,f(\varvec{x}_n)]\), with \(\varvec{x}_i \sim N(0,1)\), from a GP with zero mean and radial basis function kernel (lengthscale 0.5 and variance 2). We have then computed the full posterior of the latent function from Theorem 1, that is, a SkewGP posterior. Figure 4 compares the CPU time for sampling 1000 instances of f(X) from the SkewGP posterior as a function of n for lin-ess versus a standard elliptical slice sampler (ess) with 5000 burn-in samples. The computational advantage of lin-ess over ess is evident.

Fig. 4: Computational time for sampling from the posterior of SkewGP0 using ess versus lin-ess on a standard laptop

The average CPU time required to compute \(\varPhi _{s+|B_i|}(\tilde{\varvec{\gamma }}_{B_i};\,\tilde{\varGamma }_{B_i})\) with \(|B_i|=30\), using the randomized lattice routine with 5000 points (Genz 1992), is 0.5 seconds on a standard laptop. Since the above method is randomized, we use the simultaneous perturbation stochastic approximation algorithm (Spall 1998) to maximize the lower bound (16).

Finally, notice that both the computation of the lower bound of the marginal likelihood and the sampling from the posterior via lin-ess are highly parallelizable. In fact, each term \(\varPhi _{s+|B_i|}(\tilde{\varvec{\gamma }}_{B_i};\,\tilde{\varGamma }_{B_i})\) can be computed independently, and each sample in lin-ess can be drawn independently (because lin-ess is rejection-free and, therefore, no “burn in” is necessary).

6 Properties of the posterior

In the above sections, we have shown how to compute the posterior distribution of a SkewGP process when the likelihood is a probit model. The full conjugacy of the model allows us to prove that the posterior is again a SkewGP process. This section provides more details on the properties of the posterior and compares it with two approximations. For GP classification, there are two main approximation schemes for finding a Gaussian approximation to the posterior: Laplace’s method and the Expectation Propagation (EP) method, see, e.g., Rasmussen and Williams (2006), Chapter 3.

Figure 5 provides a one-dimensional illustration using a synthetic classification problem with 50 observations and scalar inputs taken from Kuss and Rasmussen (2005). Figure 5a shows the dataset and the predictive posterior probability for the Laplace and EP approximations. Moreover, by using a SkewGP prior with latent dimension \(s=0\) (which coincides with a GP prior), we have computed the exact SkewGP predictive posterior probability. Therefore, all three methods have the same prior: a GP with zero mean and RBF covariance function (the lengthscale and variance of the kernel are the same for all three methods and have been set equal to the values that maximise the Laplace approximation to the marginal likelihood). Figure 5c shows the posterior mean latent function and corresponding 95% credible intervals. It is evident that the true posterior (SkewGP) of the latent function is skewed (see, for instance, the region \(x\in [0,2]\) and the slice plot in Fig. 5b). Laplace’s approximation peaks at the posterior mode, but places too much mass over positive values of f. The EP approximation aims to match the first two moments of the posterior and, therefore, usually obtains a better coverage of the posterior mass. That is why EP is usually the method of choice for approximate inference in GP classification.

Figure 6 shows the true posterior and the two approximations for the same dataset, but now the lengthscale and variance of the kernel are set to their optimal values for each of the three methods. It is evident that the skewness of the posterior provides a better fit to the data.

Figure 7 shows the posteriors corresponding to a SkewGP prior process with latent dimension \(s=2\). The red dot denotes the optimal location of the pseudo-point \(r_1\), while \(r_2=13.5\) (their initial locations were 5.8 and, respectively, 6). The additional degrees of freedom of the SkewGP prior process give a much more satisfactory answer than that obtained from a GP prior model. By comparing Figs. 6 and 7, it can be noticed that the pseudo-points allow us to locally modulate the skewness. Moreover, the additional degrees of freedom do not lead to overfitting, even with small data, as highlighted by the optimized location of \(r_2\) (far away), which has no effect on the skewness of the posterior SkewGP.

Fig. 5: Plot a shows the training dataset and compares the true posterior mean probability of class 1, that is a SkewGP, versus the approximations computed via Laplace’s method and EP. Plot c shows the posterior mean latent function and corresponding 95% credible intervals for the three methods and plot b reports the density plot of the latent function posterior prediction at \(x^*=0.42\)

Fig. 6: Plot a shows the training dataset and compares the true posterior mean probability of class 1, that is a SkewGP, versus the approximations computed via Laplace’s method and EP. Plot b shows the posterior mean latent function and corresponding 95% credible intervals for the three methods

Fig. 7: Plot a shows the training dataset and compares the true posterior mean probability of class 1, that is a SkewGP2, versus the approximations computed via Laplace’s method and EP. Plot b shows the posterior mean latent function and corresponding 95% credible intervals for the three methods. The red dot denotes the optimal location of the pseudo-point \(r_1\)

7 Results

We have evaluated the proposed SkewGP classifier on a number of benchmark classification datasets and compared its classification accuracy with the accuracy of a Gaussian process classifier that uses either Laplace’s method (GP-L) or Expectation Propagation (GP-EP) for approximate Bayesian inference. For GP-L and GP-EP, we have used the implementation available in GPy (GPy, since 2012).

7.1 Penn machine learning benchmarks

From the Penn Machine Learning Benchmarks (Olson et al. 2017), we have selected 124 datasets (with up to 500 features). Since this pool includes datasets with more than two classes, we have defined a binary sub-problem for each of them by considering only the first (class 0) and second (class 1) class. The resulting binarised datasets have between 100 and 7400 instances. We have scaled the inputs to zero mean and unit standard deviation and used, as performance measure, the average information in bits of the predictions about the test targets (Kuss and Rasmussen 2005):

$$\begin{aligned} I(p_i^*,y_i)=\frac{y_i+1}{2}\log _2(p_i^*)+\frac{1-y_i}{2}\log _2(1-p_i^*)+1, \end{aligned}$$

where \(p_i^*\) is the predicted probability for class 1 and the labels are encoded as \(y_i \in \{-1,+1\}\). This score equals 1 bit if the true label is predicted with absolute certainty, 0 bits for random guessing, and takes negative values if the prediction gives higher probability to the wrong class. We have assessed this performance measure for the three classifiers using 5-fold cross-validation.
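In code, the score is a shifted log-loss; the sketch below assumes the \(\{-1,+1\}\) label encoding above and adds a small clipping term, which is ours, to avoid \(\log_2 0\).

```python
import numpy as np

def information_score(p_star, y):
    """Average information in bits, with y in {-1, +1} and p_star = P(y = +1)."""
    p_star = np.clip(p_star, 1e-12, 1.0 - 1e-12)
    I = (y + 1) / 2 * np.log2(p_star) + (1 - y) / 2 * np.log2(1.0 - p_star) + 1.0
    return I.mean()
```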

While we could use any kernel for GP-L, GP-EP and SkewGP, in this experiment we have chosen the RBF kernel with a lengthscale for each dimension. Figure 8 contrasts GP-L and GP-EP with SkewGP0 (SkewGP with \(s=0\)) and SkewGP2 (SkewGP with \(s=2\)). We selected \(s=2\) because we decided to use the same latent dimension for all datasets; since several datasets have a high ratio between the number of features and the number of instances, a latent dimension \(s>2\) would lead to a number of parameters that exceeds the number of instances, affecting the convergence of the maximization of the marginal likelihood. The proposed SkewGP2 and SkewGP0 outperform the other two models on most datasets. The average information score of SkewGP2 is 0.573 (average accuracy 0.904), of SkewGP0 is 0.557 (acc. 0.882), of GP-EP is 0.542 (acc. 0.859) and of GP-L is 0.512 (acc. 0.863).

This claim is supported by a statistical analysis. We have compared the three classifiers using the (nonparametric) Bayesian signed-rank test (Benavoli et al. 2014, 2017). This test declares two classifiers practically equivalent when the difference of average information is less than 0.01 (1%). The interval \([-0.01,0.01]\) thus defines a region of practical equivalence (rope) for classifiers. The test returns the posterior probability of the vector \([p(Cl_1 > Cl_2), p(Cl_1 \approx Cl_2), p(Cl_1 < Cl_2)]\) and, therefore, this posterior can be visualised in the probability simplex (Fig. 9). For the comparison GP-L versus GP-EP, it can be seen that, as expected, GP-EP is better than GP-L. Conversely, for GP-EP versus SkewGP0, 100% of the posterior mass is in the region in favor of SkewGP0, which is the region at the bottom right of the triangle. This confirms that SkewGP0 is practically significantly better than GP-L and GP-EP. The comparison SkewGP2 versus SkewGP0 shows that SkewGP2 has an average information score that is never practically worse than that of SkewGP0 and is better with probability of about 0.76.

The difference between GP-L and GP-EP on the one hand and SkewGP on the other is that the posterior of SkewGP can be skewed. Therefore, we expect SkewGP to outperform GP-L and GP-EP on the datasets for which the posterior is far from normal (e.g., highly skewed). To verify this, we have computed the sample skewness statistic (SS) for each test input \(\mathbf {x}_i^*\):

$$\begin{aligned} SS(\mathbf {x}_i^*)={\frac{{\text {E}} \left[ (f(\mathbf {x}_i^*)-\mu )^{3}\right] }{({\text {E}} \left[ (f(\mathbf {x}_i^*)-\mu )^{2}\right] )^{3/2}}} \end{aligned}$$

with \(\mu ={\text {E}} \left[ f(\mathbf {x}_i^*)\right]\), where the expectation \({\text {E}}[\cdot ]\) can be approximated using the posterior samples drawn as in Theorem 2. Note that \(SS(\mathbf {x}_i^*)=0\) for symmetric distributions. Figure 10 (left) shows, for each of the 124 datasets, the difference between the average information score of SkewGP0 and GP-EP on the y-axis, and \(\max _{\mathbf {x}_i^*} SS(\mathbf {x}_i^*)\) for SkewGP0 on the x-axis. We used a regression tree (green line) to detect structural changes in the mean of these data points. It is evident that, for large values of the maximum skewness statistic, SkewGP0 outperforms GP-EP (the average difference is positive). Figure 10 (right) reports a similar plot for SkewGP2, and the difference is even more evident. This confirms that SkewGP on average outperforms GP-EP on those datasets where the posterior is skewed and has similar performance otherwise.
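The skewness statistic is straightforward to estimate from the posterior samples of Theorem 2; a minimal sketch (names ours):

```python
import numpy as np

def sample_skewness(f_samples):
    """SS(x_i*) per test input, from posterior samples of shape (n_samples, n_test):
    the third central moment divided by the 3/2 power of the variance."""
    centred = f_samples - f_samples.mean(axis=0)
    return (centred ** 3).mean(axis=0) / (centred ** 2).mean(axis=0) ** 1.5

# the quantity on the x-axis of Fig. 10 is then sample_skewness(f_samples).max()
```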

Fig. 8: Average information score on 124 datasets of the Penn Machine Learning Benchmarks

Fig. 9: Bayesian Wilcoxon signed-rank test

Fig. 10: Maximum skewness statistic versus difference in average information score. For visualisation purposes only, we bounded the y-axis to \([-0.11,0.16]\)

7.2 Image classification

We have also considered an image classification task: the Fashion MNIST dataset (each image is \(28\times 28 =784\) pixels and there are 10 classes: 0 T-shirt/top, 1 Trouser, 2 Pullover, 3 Dress, 4 Coat, 5 Sandal, 6 Shirt, 7 Sneaker, 8 Bag, 9 Ankle boot). We randomly pooled 10000 images from the dataset and divided them into two sets, with 5000 cases for training and 5000 for testing. For each of the 10 classes, we have defined a binary classification sub-problem by considering that class against all the others. We have compared GP-EP and SkewGP2, that is, a SkewGP with latent dimension \(s=2\) (for the same reason outlined in the previous section). We have initialised the \(r_i\) by taking 2 random samples from the training data. We have also considered two different kernels: RBF and the Neural Network kernel (Williams 1998). Table 1 reports the accuracy for each of the 10 binary classification sub-problems. For the RBF kernel, it can be noticed that SkewGP2 outperforms GP-EP in all sub-problems. For the NN kernel, the differences between the two models are less substantial (due to the higher performance of the NN kernel on this dataset) but in any case in favor of SkewGP2. We have also reported, for both models, the computational time (in minutes) needed to optimize the hyperparameters, to compute the posterior and to compute the predictions for all instances in the test set. This shows that SkewGP2 is also faster than GP-EP. The last row reports the accuracy on the original multi-class classification problem obtained by using the one-vs-rest heuristic, with the sole goal of showing that the more accurate probability estimates of SkewGP also lead to an increase in accuracy for one-vs-rest. A multi-class Laplace approximation for GP classification was developed by Williams and Barber (1998), and other implementations are discussed, for instance, by Hernández-Lobato et al. (2011) and Chai (2012); we plan to address multi-class classification in future work.

Table 1 Image classification

Our goal is to assess not only the accuracy but also the quality of the probabilistic predictions. Figure 11, plot (a1), shows, for the RBF kernel and for each instance in the test set of the binary sub-problem of class 8 versus rest, the mean predicted probability of class rest for SkewGP2 (x-axis) and GP-EP (y-axis). Each instance is represented as a blue point. The red points highlight the instances that were misclassified by GP-EP. Plot (a2) shows the same plot, but the red points are now the instances that were misclassified by SkewGP2. By comparing (a1) and (a2), it is clear that SkewGP2 provides higher-quality probabilistic predictions.

SkewGP2 also returns a better estimate of its own uncertainty, as shown in plots (b1) versus (b2). For each instance in the test set and for each sample from the posterior, we have computed the predicted class (the class that has probability greater than 0.5). For each test instance, we have then computed the standard deviation of all these predictions and used it to color the scatter plot of the mean predicted probability. In this way, we obtain a visual representation of first-order (mean predicted probability) and second-order (standard deviation of the predictions) uncertainty. Figure 11(b1) refers to GP-EP and shows that GP-EP confidence is low only for the instances whose mean predicted probability is close to 0.5. This is not reflected in the value of the mean predicted probability for the misclassified instances (compare plots (a1) and (b1)). We have also computed the histogram of the standard deviation of the predictions for the instances misclassified by GP-EP in plot (c1). Note that the peak of the histogram corresponds to a very low standard deviation, which means that GP-EP has misclassified instances that have low second-order uncertainty. This implies that the model is overestimating its confidence. Conversely, the second-order uncertainty of SkewGP2 is clearly consistent, see plots (a2) and (b2), and in particular the histogram in (c2): the peak corresponds to high values of the standard deviation of the predictions. In other words, SkewGP2 has mainly misclassified instances with high second-order uncertainty, which is what we expect from a calibrated probabilistic model. We report additional examples of the better calibration of SkewGP2 for the MNIST and German road-sign datasets in “Appendices 1 and 2”.
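The first- and second-order uncertainties used in these plots can be computed directly from posterior samples of the predictive probability; a small sketch of the procedure described above (names ours):

```python
import numpy as np

def prediction_uncertainty(p_samples):
    """p_samples has shape (n_samples, n_test): posterior samples of P(y* = 1).
    Returns the mean predicted probability (first order) and the standard deviation
    of the hard class decisions across samples (second order)."""
    mean_prob = p_samples.mean(axis=0)
    hard = (p_samples > 0.5).astype(float)   # class decision for each posterior sample
    return mean_prob, hard.std(axis=0)
```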

Fig. 11: Fashion MNIST dataset

8 Conclusions

We have introduced the skew-Gaussian process (SkewGP) as an alternative to Gaussian processes for classification. We have shown that the SkewGP prior and the probit likelihood are conjugate, and we have provided marginals and closed-form conditionals. We have also shown that SkewGP contains the GP as a special case and, therefore, SkewGPs inherit all the good properties of GPs while increasing their flexibility. The SkewGP prior was applied to classification, showing improved performance over GPs (with Laplace’s method and Expectation Propagation approximations).

As future work, we plan to study other, more natural ways to parametrize the skewness matrix \(\varDelta\) that do not rely on an underlying kernel. Moreover, we plan to investigate the possibility of using inducing points, as for sparse GPs, to reduce the computational load of the matrix operations (complexity \(O(n^3)\) with storage demands of \(O(n^2)\)), as well as to derive the posterior for the multi-class classification problem.