1 Introduction

Machine learning is essentially concerned with extracting models from data, often (though not exclusively) using them for the purpose of prediction. As such, it is inseparably connected with uncertainty. Indeed, learning in the sense of generalizing beyond the data seen so far is necessarily based on a process of induction, i.e., replacing specific observations by general models of the data-generating process. Such models are never provably correct but only hypothetical and therefore uncertain, and the same holds true for the predictions produced by a model. In addition to the uncertainty inherent in inductive inference, other sources of uncertainty exist, including incorrect model assumptions and noisy or imprecise data.

Needless to say, a trustworthy representation of uncertainty is desirable and should be considered as a key feature of any machine learning method, all the more in safety-critical application domains such as medicine (Yang et al. 2009; Lambrou et al. 2011) or socio-technical systems (Varshney 2016; Varshney and Alemzadeh 2016). Besides, uncertainty is also a major concept within machine learning methodology itself; for example, the principle of uncertainty reduction plays a key role in settings such as active learning (Aggarwal et al. 2014; Nguyen et al. 2019), or in concrete learning algorithms such as decision tree induction (Mitchell 1980).

Traditionally, uncertainty is modeled in a probabilistic way, and indeed, in fields like statistics and machine learning, probability theory has always been perceived as the ultimate tool for uncertainty handling. Without questioning the probabilistic approach in general, one may argue that conventional approaches to probabilistic modeling, which are essentially based on capturing knowledge in terms of a single probability distribution, fail to distinguish two inherently different sources of uncertainty, which are often referred to as aleatoric and epistemic uncertainty (Hora 1996; Der Kiureghian and Ditlevsen 2009). Roughly speaking, aleatoric (aka statistical) uncertainty refers to the notion of randomness, that is, the variability in the outcome of an experiment which is due to inherently random effects. The prototypical example of aleatoric uncertainty is coin flipping: The data-generating process in this type of experiment has a stochastic component that cannot be reduced by any additional source of information (except Laplace’s demon). Consequently, even the best model of this process will only be able to provide probabilities for the two possible outcomes, heads and tails, but no definite answer. As opposed to this, epistemic (aka systematic) uncertainty refers to uncertainty caused by a lack of knowledge (about the best model). In other words, it refers to the ignorance (cf. Sects. 3.3 and Appendix A.2) of the agent or decision maker, and hence to the epistemic state of the agent instead of any underlying random phenomenon. As opposed to uncertainty caused by randomness, uncertainty caused by ignorance can in principle be reduced on the basis of additional information. For example, what does the word “kichwa” mean in the Swahili language, head or tail? The possible answers are the same as in coin flipping, and one might be equally uncertain about which one is correct. Yet, the nature of uncertainty is different, as one could easily get rid of it. In other words, epistemic uncertainty refers to the reducible part of the (total) uncertainty, whereas aleatoric uncertainty refers to the irreducible part.

Fig. 1 Predictions by EfficientNet (Tan and Le 2019) on test images from ImageNet: For the left image, the neural network predicts “typewriter keyboard” with certainty 83.14 %, for the right image “stone wall” with certainty 87.63 %

In machine learning, where the agent is a learning algorithm, the two sources of uncertainty are usually not distinguished. In some cases, such a distinction may indeed appear unnecessary. For example, if an agent is forced to make a decision or prediction, the source of its uncertainty—aleatoric or epistemic—might actually be irrelevant. This argument is often put forward by Bayesians in favor of a purely probabilistic approach (and classical Bayesian decision theory). One should note, however, that this scenario does not always apply. Instead, the ultimate decision can often be refused or delayed, like in classification with a reject option (Chow 1970; Hellman 1970), or actions can be taken that are specifically directed at reducing uncertainty, like in active learning (Aggarwal et al. 2014).

Motivated by such scenarios, and advocating a trustworthy representation of uncertainty in machine learning, Senge et al. (2014) explicitly refer to the distinction between aleatoric and epistemic uncertainty. They propose a quantification of these uncertainties and show the usefulness of their approach in the context of medical decision making. A very similar motivation is given by Kull and Flach (2014) in the context of their work on reliability maps. They distinguish between a predicted probability score and the uncertainty in that prediction, and illustrate this distinction with an example from weather forecasting (Footnote 1): “... a weather forecaster can be very certain that the chance of rain is 50 %; or her best estimate at 20 % might be very uncertain due to lack of data.” Roughly, the 50 % chance corresponds to what one may understand as aleatoric uncertainty, whereas the uncertainty in the 20 % estimate is akin to the notion of epistemic uncertainty. On the more practical side, Varshney and Alemzadeh (2016) give an example of a recent accident of a self-driving car, which led to the death of the driver (for the first time after 130 million miles of testing). They explain the car’s failure by the extremely rare circumstances, and emphasize “the importance of epistemic uncertainty or ‘uncertainty on uncertainty’ in these AI-assisted systems”.

More recently, a distinction between aleatoric and epistemic uncertainty has also been advocated in the literature on deep learning (Kendall and Gal 2017), where the limited awareness of neural networks of their own competence has been demonstrated quite nicely. For example, experiments on image classification have shown that a trained model often fails on specific instances, despite being very confident in its prediction (cf. Fig. 1). Moreover, such models often lack robustness and can easily be fooled by “adversarial examples” (Papernot and McDaniel 2018): Drastic changes of a prediction may already be provoked by minor, actually unimportant changes of an object. This problem has been observed not only for images but also for other types of data, such as natural language text (cf. Fig. 2 for an example).

Fig. 2 Adversarial example (right) misclassified by a machine learning model trained on textual data: Changing only a single—and apparently not very important—word (highlighted in bold font) is enough to turn the correct prediction “negative sentiment” into the incorrect prediction “positive sentiment” (Sato et al. 2018)

This paper provides an overview of machine learning methods for handling uncertainty, with a specific focus on the distinction between aleatoric and epistemic uncertainty in the common setting of supervised learning. In the next section, we provide a short introduction to this setting and propose a distinction between different sources of uncertainty. Section 3 elaborates on modeling epistemic uncertainty. More specifically, it considers set-based modeling and modeling with (probability) distributions as two principled approaches to capturing the learner’s epistemic uncertainty about its main target, that is, the (Bayes) optimal model within the hypothesis space. Concrete approaches for modeling and handling uncertainty in machine learning are then discussed in Sect. 4, prior to concluding the paper in Sect. 5. In addition, Appendix A provides some general background on uncertainty modeling (independent of applications in machine learning), specifically focusing on the distinction between set-based and distributional (probabilistic) representations and emphasizing the potential usefulness of combining them.

Table 1 Notation used throughout the paper

Table 1 summarizes some important notation. Let us note that, for the sake of readability, a somewhat simplified notation will be used for probability measures and associated distribution functions. Most of the time, these will be denoted by \(P\) and \(p\), respectively, even if they refer to different measures and distributions on different spaces (which will be clear from the arguments). For example, we will write \(p( h )\) and \(p(y \, | \,{\varvec{x}})\) for the probability (density) of hypotheses h (as elements of the hypothesis space \({\mathcal{H}}\)) and outcomes y (as elements of the output space \({\mathcal{Y}}\)), instead of using different symbols, such as \(p_{\mathcal{H}}( h )\) and \(p_{\mathcal{Y}}(y \, | \,{\varvec{x}})\).

2 Sources of uncertainty in supervised learning

Uncertainty occurs in various facets in machine learning, and different settings and learning problems will usually require a different handling from an uncertainty modeling point of view. In this paper, we focus on the standard setting of supervised learning (cf. Fig. 3), which we briefly recall in this section. Moreover, we identify different sources of (predictive) uncertainty in this setting.

2.1 Supervised learning and predictive uncertainty

In supervised learning, a learner is given access to a set of training data

$$\begin{aligned} {\mathcal{D}} {:}{=}\big \{ ({\varvec{x}}_1 , y_1 ), \ldots , ({\varvec{x}}_N , y_N ) \big \} \subset {\mathcal{X}} \times {\mathcal{Y}} , \end{aligned}$$
(1)

where \({\mathcal{X}}\) is an instance space and \({\mathcal{Y}}\) the set of outcomes that can be associated with an instance. Typically, the training examples \(({\varvec{x}}_i , y_i)\) are assumed to be independent and identically distributed (i.i.d.) according to some unknown probability measure \(P\) on \({\mathcal{X}} \times {\mathcal{Y}}\). Given a hypothesis space \({\mathcal{H}}\) (consisting of hypotheses \(h:\, {\mathcal{X}}\longrightarrow {\mathcal{Y}}\) mapping instances \({\varvec{x}}\) to outcomes y) and a loss function \(\ell : \, {\mathcal{Y}} \times {\mathcal{Y}} \longrightarrow \mathbb {R}\), the goal of the learner is to induce a hypothesis \(h^* \in {\mathcal{H}}\) with low risk (expected loss)

$$\begin{aligned} R(h) {:}{=}\int _{{\mathcal{X}}\times {\mathcal{Y}}} \ell ( h({\varvec{x}}) , y) \, d \, P({\varvec{x}} , y) . \end{aligned}$$
(2)

Thus, given the training data \({\mathcal{D}}\), the learner needs to “guess” a good hypothesis h. This choice is commonly guided by the empirical risk

$$\begin{aligned} R_{emp}(h) {:}{=}\frac{1}{N} \sum _{i=1}^N \ell (h({\varvec{x}}_i) , y_i) , \end{aligned}$$
(3)

i.e., the performance of a hypothesis on the training data. However, since \(R_{emp}(h)\) is only an estimate of the true risk R(h), the hypothesis (empirical risk minimizer)

$$\begin{aligned} \hat{h}{:}{=}{\mathop {\hbox {argmin}}\limits _{h \in {\mathcal{H}}}} R_{emp}(h) \end{aligned}$$
(4)

favored by the learner will normally not coincide with the true risk minimizer

$$\begin{aligned} h^* {:}{=}{\mathop {\hbox {argmin}}\limits _{h \in {\mathcal{H}}}} R(h) \, . \end{aligned}$$
(5)

Correspondingly, there remains uncertainty regarding \(h^*\) as well as the approximation quality of \(\hat{h}\) (in the sense of its proximity to \(h^*\)) and its true risk \(R(\hat{h})\).
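
To make the distinction between the empirical risk (3) and the true risk (2) concrete, the following sketch (Python/NumPy; the data-generating process, the threshold hypothesis space, and all names are purely illustrative assumptions) selects an empirical risk minimizer from a finite hypothesis space and contrasts its training performance with an estimate of its true risk on a large independent sample.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n):
    # Data-generating process P: x ~ U(0, 1), P(y = 1 | x) = x (overlapping classes).
    x = rng.uniform(0.0, 1.0, size=n)
    y = (rng.uniform(size=n) < x).astype(int)
    return x, y

def risk(theta, x, y):
    # 0/1 loss of the threshold classifier h_theta(x) = 1[x >= theta].
    return np.mean((x >= theta).astype(int) != y)

thetas = np.linspace(0.0, 1.0, 101)   # finite hypothesis space H
x_tr, y_tr = sample(50)               # small training sample D
x_te, y_te = sample(200_000)          # large sample to approximate the true risk

emp = np.array([risk(t, x_tr, y_tr) for t in thetas])
theta_hat = thetas[np.argmin(emp)]    # empirical risk minimizer (4)

print(f"theta_hat = {theta_hat:.2f}, R_emp = {emp.min():.3f}, "
      f"R(theta_hat) approx. {risk(theta_hat, x_te, y_te):.3f}")
# The risk-optimal threshold is 0.5; with only 50 examples, theta_hat will typically
# deviate from it, and the empirical risk tends to underestimate the true risk.
```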

Fig. 3 The basic setting of supervised learning: A hypothesis \(\hat{h}\in {\mathcal{H}}\) is induced from the training data \({\mathcal{D}}\) and used to produce predictions for new query instances \({\varvec{x}} \in {\mathcal{X}}\)

Eventually, one is often interested in predictive uncertainty, i.e., the uncertainty related to the prediction \(\hat{y}_{q}\) for a concrete query instance \({\varvec{x}}_{q} \in {\mathcal{X}}\). In other words, given a partial observation \(({\varvec{x}}_{q} , \cdot )\), we are wondering what can be said about the missing outcome, especially about the uncertainty related to a prediction of that outcome. Indeed, estimating and quantifying uncertainty in a transductive way, in the sense of tailoring it for individual instances, is arguably important and practically more relevant than a kind of average accuracy or confidence, which is often reported in machine learning. In medical diagnosis, for example, a patient will be interested in the reliability of a test result in her particular case, not in the reliability of the test on average. This view is also expressed, for example, by Kull and Flach (2014): “Being able to assess the reliability of a probability score for each instance is much more powerful than assigning an aggregate reliability score [...] independent of the instance to be classified.”

Emphasizing the transductive nature of the learning process, the learning task could also be formalized as a problem of predictive inference as follows: Given a set (or sequence) of data points \((X_1, Y_1), \ldots , (X_N, Y_N)\) and a query point \(X_{N+1}\), what is the associated outcome \(Y_{N+1}\)? Here, the data points are considered as (realizations of) random variables (Footnote 2), which are commonly assumed to be independent and identically distributed (i.i.d.). A prediction could be given in the form of a point prediction \(\hat{Y}_{N+1} \in {\mathcal{Y}}\), but also (and perhaps preferably) in the form of a predictive set \(\hat{C}(X_{N+1}) \subseteq {\mathcal{Y}}\) that is likely to cover the true outcome, for example an interval in the case of regression (cf. Sects. 4.8 and 4.9). Regarding the aspect of uncertainty, different properties of \(\hat{C}(X_{N+1})\) might then be of interest. For example, coming back to the discussion from the previous paragraph, a basic distinction can be made between a statistical guarantee for marginal coverage, namely \(P( Y_{N+1} \in \hat{C}(X_{N+1}) ) \ge 1 - \delta\) for a (small) threshold \(\delta > 0\), and conditional coverage, namely

$$\begin{aligned} P\Big ( Y_{N+1} \in \hat{C}(X_{N+1}) \, | \,X_{N+1} = {\varvec{x}} \Big ) \ge 1 - \delta \quad \text { for (almost) all } {\varvec{x}} \in {\mathcal{X}}\, . \end{aligned}$$

Roughly speaking, in the case of marginal coverage, one averages over both \(X_{N+1}\) and \(Y_{N+1}\) (i.e., the probability \(P\) refers to a joint measure on \({\mathcal{X}}\times {\mathcal{Y}}\)), while in the case of conditional coverage, \(X_{N+1}\) is fixed and the average is taken over \(Y_{N+1}\) only (Barber et al. 2020). Note that predictive inference as defined here does not necessarily require the induction of a hypothesis \(\hat{h}\) in the form of a (global) map \({\mathcal{X}}\longrightarrow {\mathcal{Y}}\), i.e., the solution of an induction problem. While it is true that transductive inference can be realized via inductive inference (i.e., by inducing a hypothesis \(\hat{h}\) first and then producing a prediction \(\hat{Y}_{N+1} = \hat{h}(X_{N+1})\) by applying this hypothesis to the query \(X_{N+1}\)), one should keep in mind that induction is a more difficult problem than transduction (Vapnik 1998).
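
The notion of marginal coverage can be illustrated with a small simulation. The following sketch (illustrative Python/NumPy code; for brevity, the point predictor is taken to be the true regression function rather than a fitted model) calibrates an interval half-width on held-out data and checks the empirical marginal coverage on fresh test data; conditional coverage at a fixed \({\varvec{x}}\) is a strictly stronger requirement and is not verified here.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample(n):
    # Noisy observations of a simple regression function.
    x = rng.uniform(-2.0, 2.0, size=n)
    y = np.sin(x) + rng.normal(scale=0.3, size=n)
    return x, y

# Crude point predictor; for brevity we use the true regression function here,
# in practice this would be a model fitted on separate training data.
f_hat = np.sin

# Calibrate an interval half-width from residual quantiles on held-out data ...
delta = 0.1
x_cal, y_cal = sample(500)
q = np.quantile(np.abs(y_cal - f_hat(x_cal)), 1 - delta)

# ... and check marginal coverage P(Y in C(X)) >= 1 - delta on fresh test data.
x_te, y_te = sample(100_000)
covered = np.abs(y_te - f_hat(x_te)) <= q
print(f"empirical marginal coverage: {covered.mean():.3f} (target {1 - delta})")
```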

2.2 Sources of uncertainty

Fig. 4 Different types of uncertainties related to different types of discrepancies and approximation errors: \(f^*\) is the pointwise Bayes predictor, \(h^*\) is the best predictor within the hypothesis space, and \(\hat{h}\) the predictor produced by the learning algorithm

As the prediction \(\hat{y}_{q}\) constitutes the end of a process that consists of different learning and approximation steps, all errors and uncertainties related to these steps may also contribute to the uncertainty about \(\hat{y}_{q}\) (cf. Fig. 4):

  • Since the dependency between \({\mathcal{X}}\) and \({\mathcal{Y}}\) is typically non-deterministic, the description of a new prediction problem in the form of an instance \({\varvec{x}}_{q}\) gives rise to a conditional probability distribution

    $$\begin{aligned} p( y \, | \,{\varvec{x}}_{q}) = \frac{p({\varvec{x}}_{q} , y)}{p({\varvec{x}}_q)} \end{aligned}$$
    (6)

    on \({\mathcal{Y}}\), but it normally does not identify a single outcome y uniquely. Thus, even given full information in the form of the measure \(P\) (and its density \(p\)), uncertainty about the actual outcome y remains. This uncertainty is of an aleatoric nature. In some cases, the distribution (6) itself (called the predictive posterior distribution in Bayesian inference) might be delivered as a prediction. Yet, when being forced to commit to point estimates, the best predictions (in the sense of minimizing the expected loss) are prescribed by the pointwise Bayes predictor \(f^*\) (a small numerical sketch of this predictor follows after this list), which is defined by

    $$\begin{aligned} f^*({\varvec{x}}) {:}{=}{\mathop {\hbox{argmin}}\limits _{\hat{y}\in {\mathcal{Y}}}} \int _{\mathcal{Y}}\ell (y , \hat{y}) \, d P( y \, | \,{\varvec{x}} ) \end{aligned}$$
    (7)

    for each \({\varvec{x}} \in {\mathcal{X}}\).

  • The Bayes predictor (5) does not necessarily coincide with the pointwise Bayes predictor (7). This discrepancy between \(h^*\) and \(f^*\) is connected to the uncertainty regarding the right type of model to be fit, and hence the choice of the hypothesis space \({\mathcal{H}}\) (which is part of what is called “background knowledge” in Fig. 3). We shall refer to this uncertainty as model uncertainty (Footnote 3). Thus, due to this uncertainty, one cannot guarantee that \(h^*({\varvec{x}}) = f^*({\varvec{x}})\), or, in case the hypothesis \(h^*\) (e.g., a probabilistic classifier) delivers probabilistic predictions \(p(y \, | \,{\varvec{x}}, h^*)\) instead of point predictions, that \(p( \cdot \, | \,{\varvec{x}}, h^*) = p( \cdot \, | \,{\varvec{x}})\).

  • The hypothesis \(\hat{h}\) produced by the learning algorithm, for example the empirical risk minimizer (4), is only an estimate of \(h^*\), and the quality of this estimate strongly depends on the quality and the amount of training data. We shall refer to the discrepancy between \(\hat{h}\) and \(h^*\), i.e., the uncertainty about how well the former approximates the latter, as approximation uncertainty.
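
As announced above, the following minimal sketch (Python/NumPy; the outcome set and the conditional probabilities are made-up values for illustration) evaluates the pointwise Bayes predictor (7) for a known conditional distribution and shows that different loss functions single out different point predictions: the mode for the 0/1 loss, the median for the absolute loss.

```python
import numpy as np

# Known conditional distribution p(y | x_q) over a small set of (numeric) outcomes.
outcomes = np.array([0.0, 1.0, 2.0])
p_y = np.array([0.40, 0.15, 0.45])

def bayes_prediction(loss):
    # Pointwise Bayes predictor (7): minimize the expected loss over candidate predictions.
    expected = [sum(p * loss(y, y_hat) for y, p in zip(outcomes, p_y))
                for y_hat in outcomes]
    return outcomes[int(np.argmin(expected))]

zero_one = lambda y, y_hat: float(y != y_hat)
absolute = lambda y, y_hat: abs(y - y_hat)

print(bayes_prediction(zero_one))  # 2.0, the mode of p(y | x_q)
print(bayes_prediction(absolute))  # 1.0, the median of p(y | x_q)
```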

2.3 Reducible versus irreducible uncertainty

As already said, one way to characterize uncertainty as aleatoric or epistemic is to ask whether or not the uncertainty can be reduced through additional information: Aleatoric uncertainty refers to the irreducible part of the uncertainty, which is due to the non-deterministic nature of the sought input/output dependency, that is, to the stochastic dependency between instances \({\varvec{x}}\) and outcomes y, as expressed by the conditional probability (6). Model uncertainty and approximation uncertainty, on the other hand, are subsumed under the notion of epistemic uncertainty, that is, uncertainty due to a lack of knowledge about the perfect predictor (7), for example caused by uncertainty about the parameters of a model. In principle, this uncertainty can be reduced.

This characterization, while evident at first sight, may appear somewhat blurry upon closer inspection. What does “reducible” actually mean? An obvious source of additional information is the training data \({\mathcal{D}}\): The learner’s uncertainty can be reduced by observing more data, while the setting of the learning problem—the instance space \({\mathcal{X}}\), output space \({\mathcal{Y}}\), hypothesis space \({\mathcal{H}}\), joint probability \(P\) on \({\mathcal{X}} \times {\mathcal{Y}}\)—remains fixed. In practice, this is of course not always the case. Imagine, for example, that a learner can decide to extend the description of instances by additional features, which essentially means replacing the current instance space \({\mathcal{X}}\) by another space \({\mathcal{X}}'\). This change of the setting may have an influence on uncertainty. An example is shown in Fig. 5: In a low-dimensional space (here defined by a single feature \(x_1\)), two class distributions are overlapping, which causes (aleatoric) uncertainty in a certain region of the instance space. By embedding the data in a higher-dimensional space (here accomplished by adding a second feature \(x_2\)), the two classes become separable, and the uncertainty can be resolved. More generally, embedding data in a higher-dimensional space will reduce aleatoric and increase epistemic uncertainty, because fitting a model will become more difficult and require more data.

Fig. 5 Left: The two classes are overlapping, which causes (aleatoric) uncertainty in a certain region of the instance space. Right: By adding a second feature, and hence embedding the data in a higher-dimensional space, the two classes become separable, and the uncertainty can be resolved

What this example shows is that aleatoric and epistemic uncertainty should not be seen as absolute notions. Instead, they are context-dependent in the sense of depending on the setting \(({\mathcal{X}}, {\mathcal{Y}}, {\mathcal{H}}, P)\). Changing the context will also change the sources of uncertainty: aleatoric may turn into epistemic uncertainty and vice versa. Consequently, by allowing the learner to change the setting, the distinction between these two types of uncertainty will be somewhat blurred (and their quantification will become even more difficult). This view of the distinction between aleatoric and epistemic uncertainty is also shared by Der Kiureghian and Ditlevsen (2009), who note that “these concepts only make unambiguous sense if they are defined within the confines of a model of analysis”, and that “In one model an addressed uncertainty may be aleatory, in another model it may be epistemic.”

2.4 Approximation and model uncertainty

Assuming the setting \(({\mathcal{X}}, {\mathcal{Y}}, {\mathcal{H}}, P)\) to be fixed, the learner’s lack of knowledge will essentially depend on the amount of data it has seen so far: The larger the number \(N = | {\mathcal{D}}|\) of observations, the less ignorant the learner will be when having to make a new prediction. In the limit, when \(N \rightarrow \infty\), a consistent learner will be able to identify \(h^*\) (see Fig. 6 for an illustration), i.e., it will get rid of its approximation uncertainty.

Fig. 6 Left: Even with precise knowledge about the optimal hypothesis, the prediction at the query point (indicated by a question mark) is aleatorically uncertain, because the two classes are overlapping in that region. Right: A case of epistemic uncertainty due to a lack of knowledge about the right hypothesis, which is in turn caused by a lack of data

What is (implicitly) assumed here is a correctly specified hypothesis space \({\mathcal{H}}\), such that \(f^* \in {\mathcal{H}}\). In other words, model uncertainty is simply ignored. For obvious reasons, this uncertainty is very difficult to capture, let alone quantify. In a sense, a kind of meta-analysis would be required: Instead of expressing uncertainty about the ground-truth hypothesis h within a hypothesis space \({\mathcal{H}}\), one has to express uncertainty about which \({\mathcal{H}}\) among a set \(\mathbb {H}\) of candidate hypothesis spaces might be the right one. Practically, such an analysis does not appear to be feasible. On the other hand, simply assuming a correctly specified hypothesis space \({\mathcal{H}}\) actually means neglecting the risk of model misspecification. To some extent, this appears to be unavoidable, however. In fact, the learning itself as well as all sorts of inference from the data are normally done under the assumption that the model is valid. Otherwise, since some assumptions are indeed always needed, it will be difficult to derive any useful conclusions.

As an aside, let us note that these assumptions, in addition to the nature of the ground truth \(f^*\), also include other (perhaps more implicit) assumptions about the setting and the data-generating process. For example, imagine that a sudden change of the distribution, or a strong discrepancy between training and test data (Malinin and Gales 2018), cannot be excluded. Not only prediction but also the assessment of uncertainty would then become difficult, if not impossible. Indeed, if one cannot exclude the possibility that something completely unpredictable happens, there is hardly any way to reduce predictive uncertainty. To take a simple example, (epistemic) uncertainty about the bias of a coin can be estimated and quantified, for example in the form of a confidence interval, from an i.i.d. sequence of coin tosses. But what if the bias may change from one moment to the other (e.g., because the coin is replaced by another one), or another outcome becomes possible (e.g., “invalid” if the coin has not been tossed in agreement with a new execution rule)? While this example may appear a bit artificial, there are indeed practical classification problems in which certain classes \(y \in {\mathcal{Y}}\) may “disappear” while new classes emerge, i.e., in which \({\mathcal{Y}}\) may change in the course of time. Likewise, non-stationarity of the data-generating process (the measure \(P\)), including the possibility of drift or shift of the distribution, is a common assumption in learning on data streams (Gama 2012).

Coming back to the assumptions about the hypothesis space \({\mathcal{H}}\), the latter is actually very large for some learning methods, such as nearest neighbor classification or (deep) neural networks. Thus, the learner has a high capacity (or “universal approximation” capability) and can express hypotheses in a very flexible way. In such cases, \(h^* = f^*\) or at least \(h^* \approx f^*\) can safely be assumed. In other words, since the model assumptions are so weak, model uncertainty essentially disappears (at least when disregarding or taking for granted other assumptions as discussed above). Yet, the approximation uncertainty still remains a source of epistemic uncertainty. In fact, this uncertainty tends to be high for methods like nearest neighbors or neural networks, especially if data is sparse.

In the next section, we recall two specific though arguably natural and important approaches for capturing this uncertainty, namely version space learning and Bayesian inference. In version space learning, uncertainty about \(h^*\) is represented in terms of a set of possible candidates, whereas in Bayesian learning, this uncertainty is modeled in terms of a probability distribution on \({\mathcal{H}}\). In both cases, an explicit distinction is made between the uncertainty about \(h^*\), and how this uncertainty translates into uncertainty about the outcome for a query \({\varvec{x}}_q\).

3 Modeling approximation uncertainty: set-based versus distributional representations

Bayesian inference can be seen as the main representative of probabilistic methods and provides a coherent framework for statistical reasoning that is well-established in machine learning (and beyond). Version space learning can be seen as a “logical” (and in a sense simplified) counterpart of Bayesian inference, in which hypotheses and predictions are not assessed numerically in terms of probabilities, but only qualified (deterministically) as being possible or impossible. In spite of its limited practical usefulness, version space learning is interesting for various reasons. In particular, in light of our discussion about uncertainty, it constitutes an interesting case: By construction, version space learning is free of aleatoric uncertainty, i.e., all uncertainty is epistemic.

3.1 Version space learning

In the idealized setting of version space learning, we assume a deterministic dependency \(f^*:\, {\mathcal{X}}\longrightarrow {\mathcal{Y}}\), i.e., the distribution (6) degenerates to

$$\begin{aligned} p( y \, | \,{\varvec{x}}_{q}) = \left\{ \begin{array}{ll} 1 &{} \text { if } y = f^*({\varvec{x}}_{q}) \\ 0 &{} \text { if } y \ne f^*({\varvec{x}}_{q}) \\ \end{array} \right. \end{aligned}$$
(8)

Moreover, the training data (1) is free of noise. Correspondingly, we also assume that classifiers produce deterministic predictions \(h({\varvec{x}}) \in \{ 0, 1 \}\) in the form of probabilities 0 or 1. Finally, we assume that \(f^* \in {\mathcal{H}}\), and therefore \(h^* = f^*\) (which means there is no model uncertainty).

Under these assumptions, a hypothesis \(h \in {\mathcal{H}}\) can be eliminated as a candidate as soon as it makes at least one mistake on the training data: in that case, the risk of h is necessarily higher than the risk of \(h^*\) (which is 0). The idea of the candidate elimination algorithm (Mitchell 1977) is to maintain the version space \({\mathcal{V}} \subseteq {\mathcal{H}}\) that consists of the set of all hypotheses consistent with the data seen so far:

$$\begin{aligned} {\mathcal{V}} = {\mathcal{V}}({\mathcal{H}}, {\mathcal{D}}) {:}{=}\Big \{ h \in {\mathcal{H}}\, | \,h({\varvec{x}}_i) = y_i \text { for } i = 1, \ldots , N \Big \} \end{aligned}$$
(9)

Obviously, the version space is shrinking with an increasing amount of training data, i.e., \({\mathcal{V}}({\mathcal{H}}, {\mathcal{D}}') \subseteq {\mathcal{V}}({\mathcal{H}}, {\mathcal{D}})\) for \({\mathcal{D}}\subseteq {\mathcal{D}}'\).

If a prediction \(\hat{y}_{q}\) for a query instance \({\varvec{x}}_{q}\) is sought, this query is submitted to all members \(h \in {\mathcal{V}}\) of the version space. Obviously, a unique prediction can only be made if all members agree on the outcome of \({\varvec{x}}_{q}\). Otherwise, several outcomes \(y \in {\mathcal{Y}}\) may still appear possible. Formally, mimicking the logical conjunction with the minimum operator and the existential quantification with a maximum, we can express the degree of possibility or plausibility of an outcome \(y \in {\mathcal{Y}}\) as follows (\(\llbracket \cdot \rrbracket\) denotes the indicator function):

$$\begin{aligned} \pi ( y) {:}{=}\max _{h \in {\mathcal{H}}} \min \Big ( \llbracket h \in {\mathcal{V}} \rrbracket , \llbracket h({\varvec{x}}_q) = y \rrbracket \Big ) \end{aligned}$$
(10)

Thus, \(\pi (y)=1\) if there exists a candidate hypothesis \(h \in {\mathcal{V}}\) such that \(h({\varvec{x}}_{q}) = y\), and \(\pi (y)=0\) otherwise. In other words, the prediction produced in version space learning is a subset

$$\begin{aligned} Y = Y({\varvec{x}}_q) {:}{=}\{ h({\varvec{x}}_q) \, | \,h \in {\mathcal{V}}\} = \{ y \, | \,\pi (y) = 1 \} \subseteq {\mathcal{Y}} \end{aligned}$$
(11)

See Fig. 7 for an illustration.
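
For concreteness, the following sketch (illustrative Python code; the one-dimensional threshold hypothesis space and the noise-free labels are simplifying assumptions) computes the version space (9) by brute force and derives the set-valued prediction (11) for a few query points.

```python
import numpy as np

# Noise-free training data, labeled by a deterministic target f*(x) = 1[x >= 0.6].
x_train = np.array([0.1, 0.3, 0.7, 0.9])
y_train = np.array([0, 0, 1, 1])

# Finite hypothesis space H of threshold classifiers h_t(x) = 1[x >= t].
thresholds = np.linspace(0.0, 1.0, 101)
predict = lambda t, x: int(x >= t)

# Version space (9): all hypotheses consistent with the data seen so far.
version_space = [t for t in thresholds
                 if all(predict(t, x) == y for x, y in zip(x_train, y_train))]

def prediction_set(x_q):
    # Set-valued prediction (11): all outcomes produced by some h in V.
    return sorted({predict(t, x_q) for t in version_space})

print(prediction_set(0.05))  # [0]    -> unanimous, no remaining uncertainty
print(prediction_set(0.95))  # [1]    -> unanimous
print(prediction_set(0.5))   # [0, 1] -> both outcomes still possible (epistemic uncertainty)
```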

Fig. 7 Illustration of version space learning inference. The version space \({\mathcal{V}}\), which is the subset of hypotheses consistent with the data seen so far, represents the state of knowledge of the learner. For a query \({\varvec{x}}_{q} \in {\mathcal{X}}\), the set of possible predictions is given by the set \(Y = \{ h({\varvec{x}}_q) \, | \,h \in {\mathcal{V}}\}\). The distribution on the right is the characteristic function \(\pi :\, {\mathcal{Y}} \longrightarrow \{ 0,1 \}\) of this set, which can be interpreted as a \(\{0,1\}\)-valued possibility distribution (indicating whether y is a possible outcome or not, i.e., \(\pi (y) = \llbracket y \in Y \rrbracket\))

Fig. 8 Two examples illustrating predictive uncertainty in version space learning for the case of binary classification. In the first example (above), the hypothesis space is given in terms of rectangles (assigning interior instances to the positive class and instances outside to the negative class). Both pictures show some of the (infinitely many) elements of the version space. Top left: Restricting \({\mathcal{H}}\) to axis-parallel rectangles, the first query point is necessarily positive, the second one necessarily negative, because these predictions are produced by all \(h \in {\mathcal{V}}\). As opposed to this, both positive and negative predictions can be produced for the third query. Top right: Increasing flexibility (or weakening prior knowledge) by enriching the hypothesis space and also allowing for non-axis-parallel rectangles, none of the queries can be classified with certainty anymore. In the second example (below), hypotheses are linear and quadratic discriminant functions, respectively. Bottom left: If the hypothesis space is given by linear discriminant functions, all hypotheses consistent with the data will predict the query instance as positive. Bottom right: Enriching the hypothesis space with quadratic discriminants, the members of the version space will no longer vote unanimously: some of them predict the positive and others the negative class

Note that the inference (10) can be seen as a kind of constraint propagation, in which the constraint \(h \in {\mathcal{V}}\) on the hypothesis space \({\mathcal{H}}\) is propagated to a constraint on \({\mathcal{Y}}\), expressed in the form of the subset (11) of possible outcomes; or symbolically:

$$\begin{aligned} {\mathcal{H}}, {\mathcal{D}}, {\varvec{x}}_{q} \models Y \end{aligned}$$
(12)

This view highlights the interaction between prior knowledge and data: It shows that what can be said about the possible outcomes \(y_{q}\) not only depends on the data \({\mathcal{D}}\) but also on the hypothesis space \({\mathcal{H}}\), i.e., the model assumptions the learner starts with. The specification of \({\mathcal{H}}\) always comes with an inductive bias, which is indeed essential for learning from data (Mitchell 1980). In general, both aleatoric and epistemic uncertainty (ignorance) depend on the way in which prior knowledge and data interact with each other. Roughly speaking, the stronger the knowledge the learning process starts with, the less data is needed to resolve uncertainty. In the extreme case, the true model is already known, and data is completely superfluous. Normally, however, prior knowledge is specified by assuming a certain type of model, for example a linear relationship between inputs \({\varvec{x}}\) and outputs y. Then, all else (namely the data) being equal, the degree of predictive uncertainty depends on how flexible the corresponding model class is. Informally speaking, the more restrictive the model assumptions are, the smaller the uncertainty will be. This is illustrated in Fig. 8 for the case of binary classification.

Coming back to our discussion about uncertainty, it is clear that version space learning as outlined above does not involve any kind of aleatoric uncertainty. Instead, the only source of uncertainty is a lack of knowledge about \(h^*\), and hence of epistemic nature. On the model level, the amount of uncertainty is in direct correspondence with the size of the version space \({\mathcal{V}}\) and reduces with an increasing sample size. Likewise, the predictive uncertainty could be measured in terms of the size of the set (11) of candidate outcomes. Obviously, this uncertainty may differ from instance to instance, or, stated differently, approximation uncertainty may translate into prediction uncertainty in different ways.

In version space learning, uncertainty is represented in a purely set-based manner: the version space \({\mathcal{V}}\) and prediction set \(Y({\varvec{x}}_q)\) are subsets of \({\mathcal{H}}\) and \({\mathcal{Y}}\), respectively. In other words, hypotheses \(h \in {\mathcal{H}}\) and outcomes \(y \in {\mathcal{Y}}\) are only qualified in terms of being possible or not. In the following, we discuss the Bayesian approach, in which hypotheses and predictions are qualified more gradually in terms of probabilities.

3.2 Bayesian inference

Consider a hypothesis space \({\mathcal{H}}\) consisting of probabilistic predictors, that is, hypotheses h that deliver probabilistic predictions \(p_h(y \, | \,{\varvec{x}}) = p(y \, | \,{\varvec{x}}, h)\) of outcomes y given an instance \({\varvec{x}}\). In the Bayesian approach, \({\mathcal{H}}\) is supposed to be equipped with a prior distribution \(p(\cdot )\), and learning essentially consists of replacing the prior by the posterior distribution:

$$\begin{aligned} p(h \, | \,{\mathcal{D}}) \, = \frac{p(h) \cdot p({\mathcal{D}}\, | \,h)}{p({\mathcal{D}})} \, \propto \, p(h) \cdot p({\mathcal{D}}\, | \,h) \, , \end{aligned}$$
(13)

where \(p({\mathcal{D}}\, | \,h)\) is the probability of the data given h (the likelihood of h). Intuitively, \(p( \cdot \, | \,{\mathcal{D}})\) captures the learner’s state of knowledge, and hence its epistemic uncertainty: The more “peaked” this distribution, i.e., the more concentrated the probability mass in a small region in \({\mathcal{H}}\), the less uncertain the learner is. Just like the version space \({\mathcal{V}}\) in version space learning, the posterior distribution on \({\mathcal{H}}\) provides global (averaged/aggregated over the entire instance space) instead of local, per-instance information. For a given query instance \({\varvec{x}}_{q}\), this information may translate into quite different representations of the uncertainty regarding the prediction \(\hat{y}_{q}\) (cf. Fig. 9).

Fig. 9 Illustration of Bayesian inference. Updating a prior distribution on \({\mathcal{H}}\) based on training data \({\mathcal{D}}\), a posterior distribution \(p(h \, | \,{\mathcal{D}})\) is obtained, which represents the state of knowledge of the learner (middle). Given a query \({\varvec{x}}_{q} \in {\mathcal{X}}\), the predictive posterior distribution on \({\mathcal{Y}}\) is then obtained through Bayesian model averaging: Different hypotheses \(h \in {\mathcal{H}}\) provide predictions, which are aggregated in terms of a weighted average

More specifically, the representation of uncertainty about a prediction \(\hat{y}_{q}\) is given by the image of the posterior \(p( h \, | \,{\mathcal{D}})\) under the mapping \(h \mapsto p(y \, | \,{\varvec{x}}_{q}, h)\) from hypotheses to probabilities of outcomes. This yields the predictive posterior distribution

$$\begin{aligned} p(y \, | \,{\varvec{x}}_{q}) = \int _{\mathcal{H}}p(y \, | \,{\varvec{x}}_{q} , h) \, d\, P(h \, | \,{\mathcal{D}}) . \end{aligned}$$
(14)

In this type of (proper) Bayesian inference, a final prediction is thus produced by model averaging: The predicted probability of an outcome \(y \in {\mathcal{Y}}\) is the expected probability \(p(y \, | \,{\varvec{x}}_{q}, h)\), where the expectation over the hypotheses is taken with respect to the posterior distribution on \({\mathcal{H}}\); or, stated differently, the probability of an outcome is a weighted average over its probabilities under all hypotheses in \({\mathcal{H}}\), with the weight of each hypothesis h given by its posterior probability \(p( h \, | \,{\mathcal{D}})\). Since model averaging is often difficult and computationally costly, sometimes only the single hypothesis

$$\begin{aligned} h^{map} {:}{=}{\mathop {\hbox {argmax}}\limits _{h \in {\mathcal{H}}}} p( h \, | \,{\mathcal{D}}) \end{aligned}$$
(15)

with the highest posterior probability is adopted, and predictions on outcomes are derived from this hypothesis.

In (14), aleatoric and epistemic uncertainty are not distinguished any more, because epistemic uncertainty is “averaged out.” Consider the example of coin flipping, and let the hypothesis space be given by \({\mathcal{H}} {:}{=}\{ h_\alpha \, | \,0 \le \alpha \le 1 \}\), where \(h_\alpha\) is modeling a biased coin landing heads up with a probability of \(\alpha\) and tails up with a probability of \(1- \alpha\). According to (14), we derive a probability of 1/2 for heads and tails, regardless of whether the (posterior) distribution on \({\mathcal{H}}\) is given by the uniform distribution (all coins are equally probable, i.e., the case of complete ignorance) or the one-point measure assigning probability 1 to \(h_{1/2}\) (the coin is known to be fair with complete certainty):

$$\begin{aligned} p(y ) = \int _{\mathcal{H}}\alpha \, d\, P= \frac{1}{2} = \int _{\mathcal{H}}\alpha \, d\, P' \end{aligned}$$

for the uniform measure \(P\) (with probability density function \(p(\alpha ) \equiv 1\)) as well as the measure \(P'\) with probability mass function \(p'(\alpha ) = 1\) if \(\alpha = 1/2\) and \(= 0\) for \(\alpha \ne 1/2\). Obviously, MAP inference (15) does not capture epistemic uncertainty either.
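
The coin-flipping example can be replicated numerically. In the following sketch (illustrative Python/NumPy code, with the hypothesis space discretized on a grid), Bayesian model averaging (14) yields the same predictive probability of 1/2 under a uniform posterior (complete ignorance) and under a point mass on \(h_{1/2}\) (complete certainty that the coin is fair).

```python
import numpy as np

# Hypothesis space H = {h_alpha : 0 <= alpha <= 1}, discretized on a grid.
alphas = np.linspace(0.0, 1.0, 1001)

def predictive_prob_heads(posterior):
    # Bayesian model averaging (14): expected probability of "heads" under the posterior.
    posterior = posterior / posterior.sum()
    return float(np.sum(posterior * alphas))

uniform = np.ones_like(alphas)                      # complete ignorance
point_mass = np.zeros_like(alphas)                  # coin known to be fair ...
point_mass[np.argmin(np.abs(alphas - 0.5))] = 1.0   # ... all mass on h_{1/2}

print(predictive_prob_heads(uniform))     # 0.5
print(predictive_prob_heads(point_mass))  # 0.5
# Two very different epistemic states, yet the same first-order prediction.
```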

More generally, consider the case of binary classification with \({\mathcal{Y}}{:}{=}\{ -1, +1 \}\) and \(p_h(+1 \, | \,{\varvec{x}}_{q})\) the probability predicted by the hypothesis h for the positive class. Instead of deriving a distribution on \({\mathcal{Y}}\) according to (14), one could also derive a predictive distribution for the (unknown) probability \(q {:}{=}p(+1 \, | \,{\varvec{x}}_q)\) of the positive class:

$$\begin{aligned} p(q \, | \,{\varvec{x}}_{q}) = \int _{{\mathcal{H}}} \, \llbracket \, p(+1 \, | \,{\varvec{x}}_{q} , h) = q \, \rrbracket \, d\, P(h \, | \,{\mathcal{D}}) . \end{aligned}$$
(16)

This is a second-order probability, which still contains both aleatoric and epistemic uncertainty. How to quantify the epistemic part of this uncertainty is not at all obvious, however. Intuitively, epistemic uncertainty should be reflected by the variability of the distribution (16): the more spread the probability mass over the unit interval, the higher the uncertainty seems to be. But how to put this into quantitative terms? Entropy is arguably not a good choice, for example, because this measure is invariant against redistribution of probability mass. For instance, the densities \(p\) and \(p'\) with \(p(q \, | \,{\varvec{x}}_{q}) {:}{=}12(q-1/2)^2\) and \(p'(q \, | \,{\varvec{x}}_{q}) {:}{=}3\,(1 - 2\,|q-1/2|)^2\) have the same entropy (the latter is an equimeasurable rearrangement of the former, concentrating its mass around 1/2 instead of near the extremes), although they correspond to quite different states of information. From this point of view, the variance of the distribution would be better suited, but this measure has other deficiencies (for example, it is not maximized by the uniform distribution, which could be considered as a case of minimal informedness).
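
The claim about entropy and variance can be checked numerically. The following sketch (illustrative Python/NumPy code using simple midpoint integration) confirms that the two densities above have essentially the same differential entropy but clearly different variances.

```python
import numpy as np

# Midpoint grid on (0, 1); the grid never hits points where a density is exactly 0.
n = 200_000
q = (np.arange(n) + 0.5) / n
dq = 1.0 / n

p1 = 12 * (q - 0.5) ** 2                   # mass concentrated near q = 0 and q = 1
p2 = 3 * (1 - 2 * np.abs(q - 0.5)) ** 2    # same values, rearranged around q = 1/2

def entropy(d):
    # Differential entropy -int d(q) log d(q) dq (integrand -> 0 where d -> 0).
    integrand = np.where(d > 0, d * np.log(d), 0.0)
    return -np.sum(integrand) * dq

def variance(d):
    mean = np.sum(q * d) * dq
    return np.sum((q - mean) ** 2 * d) * dq

print(entropy(p1), entropy(p2))    # (numerically) identical
print(variance(p1), variance(p2))  # 0.15 vs 0.025
```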

The difficulty of specifying epistemic uncertainty in the Bayesian approach is rooted in the more general difficulty of representing a lack of knowledge in probability theory. This issue will be discussed next.

3.3 Representing a lack of knowledge

As already said, uncertainty is captured in a purely set-based way in version space learning: \({\mathcal{V}} \subseteq {\mathcal{H}}\) is a set of candidate hypotheses, which translates into a set of candidate outcomes \(Y \subseteq {\mathcal{Y}}\) for a query \({\varvec{x}}_{q}\). In the case of sets, there is a rather obvious correspondence between the degree of uncertainty in the sense of a lack of knowledge and the size of the set of candidates: Proceeding from a reference set \(\Omega\) of alternatives, assuming some ground-truth \(\omega ^* \in \Omega\), and expressing knowledge about the latter in terms of a subset \(C \subseteq \Omega\) of possible candidates, we can clearly say that the bigger C, the larger the lack of knowledge. More specifically, for finite \(\Omega\), a common uncertainty measure in information theory is \(\log (|C|)\). Consequently, knowledge gets weaker by adding additional elements to C and stronger by removing candidates.

In standard probabilistic modeling and Bayesian inference, where knowledge is conveyed by a distribution p on \(\Omega\), it is much less obvious how to “weaken” this knowledge. This is mainly because the total amount of belief is fixed in terms of a unit mass that can be distributed among the elements \(\omega \in \Omega\). Unlike for sets, where an additional candidate \(\omega\) can be added or removed without changing the plausibility of all other candidates, increasing the weight of one alternative \(\omega\) requires decreasing the weight of another alternative \(\omega '\) by exactly the same amount.

Of course, there are also measures of uncertainty for probability distributions, most notably the (Shannon) entropy, which, for finite \(\Omega\), is given as follows:

$$\begin{aligned} H(p) {:}{=}- \sum _{\omega \in \Omega } p(\omega ) \, \log p(\omega ) \end{aligned}$$
(17)

However, these measures are primarily capturing the shape of the distribution, namely its “peakedness” or non-uniformity (Dubois and Hüllermeier 2007), and hence inform about the predictability of the outcome of a random experiment. Seen from this point of view, they are more akin to aleatoric uncertainty, whereas the set-based approach (i.e., representing knowledge in terms of a subset \(C \subseteq \Omega\) of candidates) is arguably better suited for capturing epistemic uncertainty.

For these reasons, it has been argued that probability distributions are less suitable for representing ignorance in the sense of a lack of knowledge (Dubois et al. 1996). For example, the case of complete ignorance is typically modeled in terms of the uniform distribution \(p\equiv 1/|\Omega |\) in probability theory; this is justified by the “principle of indifference” invoked by Laplace, or by referring to the principle of maximum entropy (Footnote 4). Then, however, it is not possible to distinguish between (i) precise (probabilistic) knowledge about a random event, such as tossing a fair coin, and (ii) a complete lack of knowledge due to an incomplete description of the experiment. This was already pointed out by the famous Ronald Fisher, who noted that “not knowing the chance of mutually exclusive events and knowing the chance to be equal are two quite different states of knowledge.”

Another problem in this regard is caused by the measure-theoretic grounding of probability and its additive nature. For example, the uniform distribution is not invariant under reparametrization (a uniform distribution on a parameter \(\omega\) does not translate into a uniform distribution on \(1/\omega\), although ignorance about \(\omega\) implies ignorance about \(1/\omega\)). Likewise, expressing ignorance about the length x of a cube in terms of a uniform distribution on an interval \([l, u]\) does not yield a uniform distribution of \(x^3\) on \([l^3 , u^3]\), thereby suggesting some degree of informedness about its volume. Problems of this kind render the use of a uniform prior distribution, often interpreted as representing epistemic uncertainty in Bayesian inference, at least debatable (Footnote 5).
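
The cube example is easy to reproduce by simulation. The following sketch (illustrative Python/NumPy code; the interval \([l, u] = [1, 2]\) is an arbitrary choice) samples the edge length uniformly and shows that the induced distribution of the volume is far from uniform.

```python
import numpy as np

rng = np.random.default_rng(2)

# "Ignorance" about the edge length x of a cube, modeled as a uniform distribution on [l, u].
l, u = 1.0, 2.0
x = rng.uniform(l, u, size=1_000_000)
volume = x ** 3

# If the uniform distribution really encoded ignorance, the induced distribution of
# x^3 on [l^3, u^3] = [1, 8] should arguably be uniform as well, but it is not:
hist, _ = np.histogram(volume, bins=7, range=(1.0, 8.0), density=True)
print(np.round(hist, 3))  # densities fall from roughly 0.26 to 0.09 instead of being 1/7 each
```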

The argument that a single (probability) distribution is not enough for representing uncertain knowledge is quite prominent in the literature. Correspondingly, various generalizations of standard probability theory have been proposed, including imprecise probability (Walley 1991), evidence theory (Shafer 1976; Smets and Kennes 1994), and possibility theory (Dubois and Prade 1988). These formalisms are essentially working with sets of probability distributions, i.e., they seek to take advantage of the complementary nature of sets and distributions, and to combine both representations in a meaningful way (cf. Fig. 10). We also refer to Appendix A for a brief overview.

Fig. 10 Set-based versus distributional knowledge representation on the level of predictions, hypotheses, and models. In version space learning, the model class is fixed, and knowledge about hypotheses and outcomes is represented in terms of sets (blue color). In Bayesian inference, sets are replaced by probability distributions (gray shading). Theories like possibility and evidence theory are in-between, essentially working with sets of distributions. Credal inference (cf. Sect. 4.6) generalizes Bayesian inference by replacing a single prior with a set of prior distributions (here identified by a set of hyper-parameters). In hierarchical Bayesian modeling, this set is again replaced by a distribution, i.e., a second-order probability

4 Machine learning methods for representing uncertainty

This section presents several important machine learning methods that allow for representing the learner’s uncertainty in a prediction. They differ with regard to the type of prediction produced and the way in which uncertainty is represented. Another interesting question is whether they allow for distinguishing between aleatoric and epistemic uncertainty, and perhaps even for quantifying the amount of uncertainty in terms of degrees of aleatoric and epistemic (and total) uncertainty.

Structuring the methods or arranging them in the form of a taxonomy is not an easy task, and various categorizations are conceivable. As shown in the overview in Fig. 11, one possible distinction is whether a method is rooted in classical, frequentist statistics (building on methods for maximum likelihood inference and parameter estimation like in Sect. 4.2, density estimation like in Sect. 4.3, or hypothesis testing like in Sect. 4.8), or whether it is more inspired by Bayesian inference. Another distinction that can be made is between uncertainty quantification and set-valued prediction: Most of the methods produce a prediction and equip this prediction with additional information about the certainty or uncertainty in that prediction. The other way around, one may of course also pre-define a desired level of certainty, and then produce a prediction that complies with this requirement—naturally, such a prediction appears in the form of a set of candidates that is sufficiently likely to contain the ground truth.

The first three sections are related to classical statistics and probability estimation. Machine learning methods for probability estimation (Sect. 4.1), i.e., for training a probabilistic predictor, often commit to a single hypothesis learned on the data, thereby ignoring epistemic uncertainty. Instead, predictions obtained by this hypothesis essentially represent aleatoric uncertainty. As opposed to this, Sect. 4.2 discusses likelihood-based inference and Fisher information, which allows for deriving confidence regions that reflect epistemic uncertainty about the hypothesis. The focus here (like in frequentist statistics in general) is more on parameter estimation and less on prediction. Section 4.3 presents methods for learning generative models, which essentially comes down to the problem of density estimation. Here, predictions and uncertainty quantification are produced on the basis of class-conditional density functions.

The second block of methods is more in the spirit of Bayesian machine learning. Here, knowledge and uncertainty about the sought (Bayes-optimal) hypothesis is maintained explicitly in the form of a distribution on the hypothesis space (Sects. 4.4 and 4.5), a set of distributions (Sect. 4.6), or a graded set of distributions (Sect. 4.7), and this knowledge is updated in the light of additional observations (see also Fig. 10). Correspondingly, all these methods provide information about both aleatoric and epistemic uncertainty. Perhaps the most well-known approach in the realm of Bayesian machine learning is Gaussian processes, which are discussed in Sect. 4.4. Another popular approach is deep learning and neural networks, and especially interesting from the point of view of uncertainty quantification is recent work on “Bayesian” extensions of such networks. Section 4.6 addresses the concept of credal sets, which is closely connected to the theory of imprecise probabilities, and which can be seen as a generalization of Bayesian inference where a single prior distribution is replaced by a set of priors. Section 4.7 presents an approach to reliable prediction that combines set-based and distributional (probabilistic) inference, and which, to the best of our knowledge, was the first to explicitly motivate the distinction between aleatoric and epistemic uncertainty in a machine learning context.

Finally, methods focusing on set-valued prediction are discussed in Sects. 4.8 and 4.9. Conformal prediction is rooted in frequentist statistics and hypothesis testing. Roughly speaking, to decide whether to include or exclude a candidate prediction, the correctness of that prediction is tested as a hypothesis. The second approach makes use of concepts from statistical (Bayesian) decision theory and constructs set-valued predictions based on the principle of expected utility maximization.

As already said, the categorization of methods is certainly not unique. For example, as the credal approach is of a “set-valued” nature (starting with a set of priors, a set of posterior distributions is derived, and from these in turn a set of predictions), it has a strong link to set-valued prediction (as indicated by the dashed line in Fig. 11). Likewise, the approach to reliable classification in Sect. 4.7 makes use of the concept of normalized likelihood, and hence has a strong connection to likelihood-based inference.

Fig. 11 Overview of the methods presented in Sect. 4

4.1 Probability estimation via scoring, calibration, and ensembling

There are various methods in machine learning for inducing probabilistic predictors. These are hypotheses h that do not merely output point predictions \(h({\varvec{x}}) \in {\mathcal{Y}}\), i.e., elements of the output space \({\mathcal{Y}}\), but probability estimates \(p_h(\cdot \, | \,{\varvec{x}}) = p(\cdot \, | \,{\varvec{x}}, h)\), i.e., complete probability distributions on \({\mathcal{Y}}\). In the case of classification, this means predicting a single (conditional) probability \(p_h(y \, | \,{\varvec{x}}) = p(y \, | \,{\varvec{x}} , h)\) for each class \(y \in {\mathcal{Y}}\), whereas in regression, \(p( \cdot \, | \,{\varvec{x}}, h)\) is a density function on \(\mathbb {R}\). Such predictors can be learned in a discriminative way, i.e., in the form of a mapping \({\varvec{x}} \mapsto p( \cdot \, | \,{\varvec{x}})\), or in a generative way, which essentially means learning a joint distribution on \({\mathcal{X}} \times {\mathcal{Y}}\). Moreover, the approaches can be parametric (assuming specific parametric families of probability distributions) or non-parametric. Well-known examples include classical statistical methods such as logistic and linear regression, Bayesian approaches such as Bayesian networks and Gaussian processes (cf. Sect. 4.4), as well as various techniques in the realm of (deep) neural networks (cf. Sect. 4.5).

Training probabilistic predictors is typically accomplished by minimizing suitable loss functions, i.e., loss functions that enforce “correct” (conditional) probabilities as predictions. In this regard, proper scoring rules (Gneiting and Raftery 2005) play an important role, including the log-loss as a well-known special case. Sometimes, however, estimates are also obtained in a very simple way, following basic frequentist techniques for probability estimation, like in Naïve Bayes or nearest neighbor classification.
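
As a small illustration of why the log-loss qualifies as a proper scoring rule, the following sketch (illustrative Python/NumPy code) shows that the expected log-loss is minimized by reporting the true class probability.

```python
import numpy as np

# Expected log-loss E_y[-log p_hat(y)] when the true probability of class 1 is p_true;
# as a proper scoring rule, it is minimized by reporting p_hat = p_true.
def expected_log_loss(p_hat, p_true):
    return -(p_true * np.log(p_hat) + (1 - p_true) * np.log(1 - p_hat))

p_true = 0.7
candidates = np.linspace(0.01, 0.99, 99)
best = candidates[np.argmin(expected_log_loss(candidates, p_true))]
print(best)  # 0.7: honest probability reporting is optimal
```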

The predictions delivered by corresponding methods are at best “pseudo-probabilities” that are often not very accurate. Besides, there are many methods that deliver natural scores, intuitively expressing a degree of confidence (like the distance from the separating hyperplane in support vector machines), but which do not immediately qualify as probabilities either. The idea of scaling or calibration methods is to turn such scores into proper, well-calibrated probabilities, that is, to learn a mapping from scores to the unit interval that can be applied to the output of a predictor as a kind of post-processing step (Flach 2017). Examples of such methods include binning (Zadrozny and Elkan 2001), isotonic regression (Zadrozny and Elkan 2002), logistic scaling (Platt 1999) and improvements thereof (Kull et al. 2017), as well as the use of Venn predictors (Johansson et al. 2018). Calibration is still a topic of ongoing research.
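
As a hedged sketch of how such a post-processing step might look in practice (assuming scikit-learn is available; the synthetic data and all parameter choices are illustrative), logistic (Platt) scaling can be applied to the decision scores of a support vector machine to obtain probability estimates; isotonic regression would be used analogously via method="isotonic".

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A linear SVM outputs decision scores (signed distances to the hyperplane), not
# probabilities; logistic ("sigmoid", i.e., Platt) scaling maps these scores to
# probability estimates, fitted here on cross-validation folds.
calibrated = CalibratedClassifierCV(LinearSVC(), method="sigmoid", cv=5)
calibrated.fit(X_tr, y_tr)
print(calibrated.predict_proba(X_te)[:5, 1])  # probability estimates for the positive class
```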

Another important class of methods is ensemble learning, such as bagging or boosting, which is especially popular in machine learning due to its ability to improve the accuracy of (point) predictions. Since such methods produce a (large) set of predictors \(h_1, \ldots , h_M\) instead of a single hypothesis, it is tempting to produce probability estimates following basic frequentist principles. In the simplest case (of classification), each prediction \(h_i({\varvec{x}})\) can be interpreted as a “vote” in favor of a class \(y \in {\mathcal{Y}}\), and probabilities can be estimated by relative frequencies—needless to say, probabilities constructed in this way tend to be biased and are not necessarily well calibrated. Especially important in this field are tree-based methods such as random forests (Breiman 2001; Kruppa et al. 2014).
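
The following sketch (again assuming scikit-learn; data and parameters are illustrative) contrasts simple vote frequencies over the trees of a random forest with the probability estimates returned by the library, which averages the per-tree class probabilities instead; neither is guaranteed to be well calibrated.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Frequentist "vote" probabilities: the fraction of trees predicting class 1.
votes = np.stack([tree.predict(X[:5]) for tree in forest.estimators_], axis=0)
print(np.round(votes.mean(axis=0), 3))

# scikit-learn instead averages the per-tree class probabilities.
print(np.round(forest.predict_proba(X[:5])[:, 1], 3))
```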

Obviously, while standard probability estimation is a viable approach to representing uncertainty in a prediction, there is no explicit distinction between different types of uncertainty. Methods falling into this category are mostly concerned with the aleatoric part of the overall uncertainty (Footnote 6).

4.2 Maximum likelihood estimation and Fisher information

The notion of likelihood is one of the key elements of statistical inference in general, and maximum likelihood estimation is an omnipresent principle in machine learning as well. Indeed, many learning algorithms, including those used to train deep neural networks, realize model induction as likelihood maximization. Besides, there is a close connection between likelihood-based and Bayesian inference, as in many cases, the former is effectively equivalent to the latter with a uniform prior—disregarding, of course, important conceptual differences and philosophical foundations.

Adopting the common perspective of classical (frequentist) statistics, consider a data-generating process specified by a parametrized family of probability measures \(P_{{\varvec{\theta}}}\), where \({\varvec{\theta}} \in \Theta \subseteq \mathbb {R}^d\) is a parameter vector. Let \(f_{{\varvec{\theta}}}( \cdot )\) denote the density (or mass) function of \(P_{{\varvec{\theta}}}\), i.e., \(f_{{\varvec{\theta}}}( X)\) is the probability of observing the data X when sampling from \(P_{{\varvec{\theta}}}\). An important problem in statistical inference is to estimate the parameter \({\varvec{\theta}}\), i.e., to identify the underlying data-generating process \(P_{{\varvec{\theta}}}\), on the basis of a set of observations \({\mathcal{D}} = \{ X_1, \ldots , X_N \}\), and maximum likelihood estimation (MLE) is a general principle for tackling this problem. More specifically, the MLE principle prescribes to estimate \({\varvec{\theta}}\) by the maximizer of the likelihood function, or, equivalently, the log-likelihood function. Assuming that \(X_1, \ldots , X_N\) are independent, and hence that \({\mathcal{D}}\) is distributed according to \(( P_{{\varvec{\theta}}})^N\), the log-likelihood function is given by

$$\begin{aligned} \ell _N({\varvec{\theta}}) {:}{=}\sum _{n=1}^N \log f_{{\varvec{\theta}}}(X_n ) \, . \end{aligned}$$

An important result of mathematical statistics states that the distribution of the maximum likelihood estimate \(\hat{{\varvec{\theta}}}\) is asymptotically normal. More specifically, as \(N \rightarrow \infty\), the distribution of \(\hat{{\varvec{\theta}}}\) is approximately normal with mean \({\varvec{\theta}}\) and covariance matrix \({\mathcal{I}}_N^{-1}({\varvec{\theta}})\), where

$$\begin{aligned} {\mathcal{I}}_N({\varvec{\theta}}) = - \left[ {\mathbf{E}}_{{\varvec{\theta}}} \left( \frac{\partial ^2 \ell _N}{\partial \theta _i \, \partial \theta _j} \right) \right] _{1 \le i,j \le d} \end{aligned}$$

is the Fisher information matrix (the expected negative Hessian of the log-likelihood function at \({\varvec{\theta}}\)). This result has many implications, both theoretically and practically. For example, the Fisher information plays a central role in the Cramér-Rao bound: its inverse provides a lower bound on the (co)variance of any unbiased estimator of \({\varvec{\theta}}\) (Frieden 2004).

Moreover, the Fisher information matrix allows for constructing (approximate) confidence regions for the sought parameter \({\varvec{\theta}}\) around the estimate \(\hat{{\varvec{\theta}}}\) (approximatingFootnote 7 \({\mathcal{I}}_N({\varvec{\theta}})\) by \({\mathcal{I}}_N(\hat{{\varvec{\theta}}})\)). Obviously, the larger this region, the higher the (epistemic) uncertainty about the true model \({\varvec{\theta}}\). Roughly speaking, the size of the confidence region is in direct correspondence with the “peakedness” of the likelihood function around its maximum. If the likelihood function is peaked, small changes of the data do not change the estimate \(\hat{{\varvec{\theta}}}\) too much, and the learner is relatively sure about the true model (data-generating process). As opposed to this, a flat likelihood function reflects a high level of uncertainty about \({\varvec{\theta}}\), because many parameters close to \(\hat{{\varvec{\theta}}}\) have a similar likelihood.

In a machine learning context, where parameters \({\varvec{\theta}}\) may identify hypotheses \(h = h_{{\varvec{\theta}}}\), a confidence region for the former can be seen as a representation of epistemic uncertainty about \(h^*\), or, more specifically, approximation uncertainty. To obtain a quantitative measure of uncertainty, the Fisher information matrix can be summarized in a scalar statistic, for example the trace (of the inverse) or the smallest eigenvalue. Based on corresponding measures, Fisher information has been used, among others, for optimal statistical design (Pukelsheim 2006) and active machine learning (Sourati et al. 2018).
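To make the connection between the peakedness of the likelihood and epistemic (approximation) uncertainty concrete, the following minimal sketch, assuming a simple Bernoulli model and synthetic data, computes the maximum likelihood estimate, the Fisher information, and the resulting approximate confidence interval; the interval shrinks with increasing sample size, reflecting the reduction of epistemic uncertainty:

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = 0.3  # hypothetical data-generating Bernoulli parameter

for N in (20, 200, 2000):
    X = rng.binomial(1, theta_true, size=N)

    # Maximum likelihood estimate (clipped to avoid division by zero below)
    theta_hat = np.clip(X.mean(), 1e-6, 1 - 1e-6)

    # Fisher information of the Bernoulli model: I_N(theta) = N / (theta (1 - theta))
    fisher_info = N / (theta_hat * (1.0 - theta_hat))

    # Approximate 95% confidence interval based on the asymptotic normality of the MLE
    half_width = 1.96 / np.sqrt(fisher_info)
    print(f"N={N:5d}  theta_hat={theta_hat:.3f}  "
          f"95% CI = [{theta_hat - half_width:.3f}, {theta_hat + half_width:.3f}]")
```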

4.3 Generative models

Fig. 12 If a hypothesis space is very rich, so that it can essentially produce every decision boundary, the epistemic uncertainty is high in sparsely populated regions without training examples. The lower query point in the left picture, for example, could easily be classified as red (middle picture) but also as black (right picture). In contrast, the second query is in a region where the two class distributions overlap, and where the aleatoric uncertainty is high

As already explained, for learning methods with a very large hypothesis space \({\mathcal{H}}\), model uncertainty essentially disappears, while approximation uncertainty might be high (cf. Sect. 2.4). In particular, this includes learning methods with high flexibility, such as (deep) neural networks and nearest neighbors. In methods of that kind, there is no explicit assumption about a global structure of the dependency between inputs \({\varvec{x}}\) and outcomes y. Therefore, inductive inference will essentially be of a local nature: A class y is approximated by the region in the instance space in which examples from that class have been seen, aleatoric uncertainty occurs where such regions are overlapping, and epistemic uncertainty where no examples have been encountered so far (cf. Fig. 12).

An intuitive idea, then, is to use generative models to quantify epistemic uncertainty. Such approaches typically look at the density \(p({\varvec{x}})\) to decide whether an input point lies in a region of high or low density, where low density serves as a proxy for high epistemic uncertainty. The density \(p({\varvec{x}})\) can be estimated with traditional methods, such as kernel density estimation or Gaussian mixtures, but novel density estimation methods continue to appear in the machine learning literature. Some more recent approaches in this area include isolation forests (Liu et al. 2009), auto-encoders (Goodfellow et al. 2016), and radial basis function networks (Bazargami and Mac-Namee 2019).
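A minimal sketch of this idea, using a plain Gaussian kernel density estimate on synthetic two-dimensional data, flags queries in low-density regions as cases of high epistemic uncertainty; the bandwidth and the threshold are arbitrary choices made for illustration only:

```python
import numpy as np

def gaussian_kde(X_train, X_query, bandwidth=0.5):
    """Simple Gaussian kernel density estimate p(x) at the query points."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    dim = X_train.shape[1]
    norm = (2 * np.pi * bandwidth ** 2) ** (dim / 2)
    return np.exp(-d2 / (2 * bandwidth ** 2)).mean(axis=1) / norm

rng = np.random.default_rng(1)
X_train = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(200, 2))  # densely sampled region

X_query = np.array([[0.1, -0.2],   # inside the training region
                    [4.0,  4.0]])  # far away from all training data
density = gaussian_kde(X_train, X_query)

# Low density acts as a proxy for high epistemic uncertainty
threshold = 0.01
for x, p in zip(X_query, density):
    flag = "high epistemic uncertainty" if p < threshold else "well covered by data"
    print(x, f"p(x)={p:.4f}", "->", flag)
```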

As shown by these methods, density estimation is also a central building block in anomaly and outlier detection. Often, a threshold is applied on top of the density \(p({\varvec{x}})\) to decide whether a data point is an outlier or not. In auto-encoders, for example, a low-dimensional representation of the original input is constructed, and the reconstruction error can be used as a measure of support of a data point within the underlying distribution. Methods of that kind can be classified as semi-supervised outlier detection methods, in contrast to supervised methods, which use annotated outliers during training. Many semi-supervised outlier detection methods are inspired by one-class support vector machines (SVMs), which fit a hyperplane that separates outliers from “normal” data points in a high-dimensional space (Khan and Madden 2014). Some variations exist, where for example a hypersphere instead of a hyperplane is fitted to the data (Tax and Duin 2004).

Outlier detection is also somewhat related to the setting of classification with a reject option, e.g., when the classifier refuses to predict a label in low-density regions (Perello-Nieto et al. 2016). For example, Ziyin et al. (2019) adopt this viewpoint in an optimization framework where an outlier detector and a standard classifier are jointly optimized. Since a data point is rejected if it is an outlier, the focus is here on epistemic uncertainty. In contrast, most papers on classification with reject option employ a reasoning on conditional class probabilities \(p(y \, | \,{\varvec{x}})\), using specific utility scores. These approaches rather capture aleatoric uncertainty (see also Sect. 4.9), showing that seemingly related papers adopting the same terminology can have different notions of uncertainty in mind.

In recent papers, outlier detection is often combined with the idea of set-valued prediction (to be further discussed in Sects. 4.6 and 4.8). Here, three scenarios can occur for a multi-class classifier: (i) A single class is predicted. In this case, the epistemic and aleatoric uncertainty are both assumed to be sufficiently low. (ii) The “null set” (empty set) is predicted, i.e., the classifier abstains when there is not enough support for making a prediction. Here, the epistemic uncertainty is too high to make a prediction. (iii) A set of cardinality bigger than one is predicted. Here, the aleatoric uncertainty is too high, while the epistemic uncertainty is assumed to be sufficiently low.

Hechtlinger et al. (2019) implement this idea by fitting a generative model \(p({\varvec{x}}\, | \,y)\) per class, and predicting null sets for data points that have a too low density for any of those generative models. A set of cardinality one is predicted when the data point has a sufficiently high density for exactly one of the models \(p({\varvec{x}} \, | \,y)\), and a set of higher cardinality is returned when this happens for more than one of the models. Sondhi et al. (2019) propose a different framework with the same reasoning in mind. Here, a pair of models is fitted in a joint optimization problem. The first model acts as an outlier detector that intends to reject as few instances as possible, whereas the second model optimizes a set-based utility score. The joint optimization problem is therefore a linear combination of two objectives that capture, respectively, components of epistemic and aleatoric uncertainty.
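In the spirit of the first of these approaches, the following sketch fits one generative model per class (here simply a univariate Gaussian) and returns the set of classes whose class-conditional density at the query exceeds a threshold; the data, the threshold, and the model class are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two classes with overlapping class-conditional densities (illustrative data)
X0 = rng.normal(loc=-1.0, scale=1.0, size=500)
X1 = rng.normal(loc=+1.0, scale=1.0, size=500)

def fit_gaussian(X):
    return X.mean(), X.std()

def density(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

models = {0: fit_gaussian(X0), 1: fit_gaussian(X1)}
threshold = 0.1  # hypothetical density threshold

def predict_set(x):
    """Set of classes y whose class-conditional density p(x | y) exceeds the threshold."""
    return {y for y, (mu, sigma) in models.items() if density(x, mu, sigma) > threshold}

for x in (-1.0, 0.0, 8.0):
    Y = predict_set(x)
    if not Y:
        note = "null set: too little support, high epistemic uncertainty"
    elif len(Y) == 1:
        note = "single class: low aleatoric and epistemic uncertainty"
    else:
        note = "several classes: overlapping densities, high aleatoric uncertainty"
    print(f"x={x:+.1f}  prediction={sorted(Y)}  ({note})")
```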

Generative models often deliver intuitive solutions to the quantification of epistemic and aleatoric uncertainty. In particular, the distinction between the two types of uncertainty explained above corresponds to what Hüllermeier and Brinker (2008) called “conflict” (overlapping distributions, evidence in favor of more than one class) and “ignorance” (sparsely populated region, lack of evidence in favor of any class). Yet, like essentially all other methods discussed in this section, they also have disadvantages. An inherent problem with semi-supervised outlier detection methods is how to define the threshold that decides whether a data point is an outlier or not. Another issue is how to choose the model class for the generative model. Density estimation is a difficult problem, which, to be successful, typically requires a lot of data. When the sample size is small, specifying the right model class is not an easy task, so the model uncertainty will typically be high.

4.4 Gaussian processes

Just like in likelihood-based inference, the hypothesis space in Bayesian inference (cf. Sect. 3.2) is often parametrized in the sense that each hypothesis \(h = h_{{\varvec{\theta}}} \in {\mathcal{H}}\) is (uniquely) identified by a parameter (vector) \({\varvec{\theta}} \in \Theta \subseteq \mathbb {R}^d\) of fixed dimensionality d. Thus, computation of the posterior (13) essentially comes down to updating beliefs about the true (or best) parameter, which is treated as a multivariate random variable:

$$\begin{aligned} p({\varvec{\theta}} \, | \,{\mathcal{D}}) \, \propto \, p({\varvec{\theta}}) \cdot p({\mathcal{D}}\, | \,{\varvec{\theta}}) . \end{aligned}$$
(18)

Gaussian processes (Seeger 2004) generalize the Bayesian approach from inference about multivariate (but finite-dimensional) random variables to inference about (infinite-dimensional) functions. Thus, they can be thought of as distributions not just over random vectors but over random functions.

More specifically, a stochastic process in the form of a collection of random variables \(\{ f({\varvec{x}}) \, | \,{\varvec{x}} \in {\mathcal{X}} \}\) with index set \({\mathcal{X}}\) is said to be drawn from a Gaussian process with mean function m and covariance function k, denoted \(f \sim {\mathcal{GP}}(m, k)\), if for any finite set of elements \({\varvec{x}}_1, \ldots , {\varvec{x}}_m \in {\mathcal{X}}\), the associated finite set of random variables \(f({\varvec{x}}_1), \ldots , f({\varvec{x}}_m)\) has the following multivariate normal distribution:

$$\begin{aligned} \left[ \begin{matrix} f({\varvec{x}}_1) \\ f({\varvec{x}}_2) \\ \vdots \\ f({\varvec{x}}_m) \end{matrix} \right] \sim {\mathcal{N}} \left( \left[ \begin{matrix} m({\varvec{x}}_1) \\ m({\varvec{x}}_2) \\ \vdots \\ m({\varvec{x}}_m) \end{matrix} \right] , \left[ \begin{matrix} k({\varvec{x}}_1, {\varvec{x}}_1) & \cdots & k({\varvec{x}}_1, {\varvec{x}}_m) \\ \vdots & \ddots & \vdots \\ k({\varvec{x}}_m, {\varvec{x}}_1) & \cdots & k({\varvec{x}}_m, {\varvec{x}}_m) \end{matrix} \right] \right) \end{aligned}$$

Note that the above properties imply

$$\begin{aligned} m({\varvec{x}})&= {\mathbf{E}}( f({\varvec{x}}) ) \, , \\ k({\varvec{x}}, {\varvec{x}}')&= {\mathbf{E}}\big ( (f({\varvec{x}}) - m( {\varvec{x}} )) (f({\varvec{x}}') - m( {\varvec{x}}' )) \big ) \, , \end{aligned}$$

and that k needs to obey the properties of a kernel function (so as to guarantee proper covariance matrices). Intuitively, a function f drawn from a Gaussian process prior can be thought of as a (very) high-dimensional vector drawn from a (very) high-dimensional multivariate Gaussian. Here, each dimension of the Gaussian corresponds to an element \({\varvec{x}}\) from the index set \({\mathcal{X}}\), and the corresponding component of the random vector represents the value of \(f({\varvec{x}})\).

Gaussian processes allow for doing proper Bayesian inference (13) in a non-parametric way: Starting with a prior distribution on functions \(h \in {\mathcal{H}}\), specified by a mean function m (often the zero function) and kernel k, this distribution can be replaced by a posterior in light of observed data \({\mathcal{D}}= \{ ({\varvec{x}}_i , y_i ) \}_{i=1}^N\), where an observation \(y_i = f({\varvec{x}}_i) + \epsilon _i\) could be corrupted by an additional (additive) noise component \(\epsilon _i\). Likewise, a posterior predictive distribution can be obtained on outcomes \(y \in {\mathcal{Y}}\) for a new query \({\varvec{x}}_{q} \in {\mathcal{X}}\). In the general case, the computations involved are intractable, though they turn out to be rather simple in the case of regression (\({\mathcal{Y}}= \mathbb {R}\)) with Gaussian noise. In this case, and assuming a zero-mean prior, the posterior predictive distribution is again a Gaussian with mean \(\mu\) and variance \(\sigma ^2\) as follows:

$$\begin{aligned} \mu&= K({\varvec{x}}_{q} , X) (K(X,X) + \sigma _\epsilon ^2 I)^{-1} \varvec{y} \, , \\ \sigma ^2&= K({\varvec{x}}_{q} , {\varvec{x}}_{q}) + \sigma _\epsilon ^2 - K({\varvec{x}}_{q} , X) (K(X,X) + \sigma _\epsilon ^2 I)^{-1} K(X, {\varvec{x}}_{q}) \, , \end{aligned}$$

where X is an \(N \times d\) matrix summarizing the training inputs (the \(i^{th}\) row of X is given by \(({\varvec{x}}_i)^\top\)), \(K(X,X)\) is the kernel matrix with entries \((K(X,X))_{i,j} = k({\varvec{x}}_i , {\varvec{x}}_j)\), \(\varvec{y} = (y_1 , \ldots , y_N) \in \mathbb {R}^N\) is the vector of observed training outcomes, and \(\sigma _\epsilon ^2\) is the variance of the (additive) error term for these observations (i.e., \(y_i\) is normally distributed with expected value \(f({\varvec{x}}_i)\) and variance \(\sigma _\epsilon ^2\)).
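The following sketch evaluates these two formulas with a squared-exponential kernel on a toy one-dimensional data set; all hyperparameters (length-scale, noise variance) are assumed rather than tuned, and the predictive variance is smallest in the vicinity of the training inputs:

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    """Squared-exponential (RBF) covariance function."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

rng = np.random.default_rng(3)

# Toy 1D regression data (illustrative)
X = rng.uniform(0, 6, size=(8, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=8)
sigma_eps2 = 0.1 ** 2  # assumed noise variance

X_query = np.linspace(0, 6, 5).reshape(-1, 1)

K = rbf_kernel(X, X)                  # K(X, X)
K_qX = rbf_kernel(X_query, X)         # K(x_q, X)
K_qq = rbf_kernel(X_query, X_query)   # K(x_q, x_q)

# Posterior predictive mean and variance for regression with Gaussian noise
Kn = K + sigma_eps2 * np.eye(len(X))
mu = K_qX @ np.linalg.solve(Kn, y)
V = np.linalg.solve(Kn, K_qX.T)       # (K(X,X) + sigma^2 I)^{-1} K(X, x_q)
var = np.diag(K_qq) + sigma_eps2 - np.sum(K_qX * V.T, axis=1)

for xq, m, v in zip(X_query[:, 0], mu, var):
    print(f"x_q={xq:.2f}  mean={m:+.3f}  std={np.sqrt(v):.3f}")
```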

Problems with discrete outcomes y, such as binary classification with \({\mathcal{Y}}= \{ -1, +1 \}\), are made amenable to Gaussian processes by suitable link functions, linking these outcomes with the real values \(h = h({\varvec{x}})\) as underlying (latent) variables. For example, using the logistic link function

$$\begin{aligned} p( y \, | \,h) = \sigma (h) = \frac{1}{1 + \exp ( - y \, h)} \, , \end{aligned}$$

the following posterior predictive distribution is obtained:

$$\begin{aligned} p( y = +1 \, | \,X , \varvec{y} , {\varvec{x}}_{q}) = \int \sigma (h') \, p( h' \, | \,X , \varvec{y} , {\varvec{x}}_{q}) \, d \, h' \, , \end{aligned}$$

where

$$\begin{aligned} p( h' \, | \,X , \varvec{y} , {\varvec{x}}_{q}) = \int p( h' \, | \,X , {\varvec{x}}_{q}, \varvec{h})\, p( \varvec{h} \, | \,X, \varvec{y}) \, d \, \varvec{h} \, . \end{aligned}$$

However, since the likelihood will no longer be Gaussian, approximate inference techniques (e.g., Laplace, expectation propagation, MCMC) will be needed.

Fig. 13 Simple one-dimensional illustration of Gaussian processes (\({\mathcal{X}}= [0,6]\)), with very few examples on the left and more examples on the right. The predictive uncertainty, as reflected by the width of the confidence band around the mean function (orange line), decreases with an increasing number of observations

As for uncertainty representation, our general remarks on Bayesian inference (cf. Sect. 3.2) obviously apply to Gaussian processes as well. In the case of regression, the variance \(\sigma ^2\) of the posterior predictive distribution for a query \({\varvec{x}}_{q}\) is a meaningful indicator of (total) uncertainty. The variance of the error term, \(\sigma _\epsilon ^2\), corresponds to the aleatoric uncertainty, so that the difference could be considered as epistemic uncertainty. The latter is largely determined by the (parameters of the) kernel function, e.g., the characteristic length-scale of a squared exponential covariance function (Blum and Riedmiller 2013). Both have an influence on the width of confidence intervals that are typically obtained for per-instance predictions, and which reflect the total amount of uncertainty (cf. Fig. 13). By determining all (hyper-) parameters of a GP, these sources of uncertainty can in principle be separated, although the estimation itself is of course not an easy task.

4.5 Deep neural networks

Work on uncertainty in deep learning fits the general framework we introduced so far, especially the Bayesian approach, quite well, although the methods proposed are specifically tailored for neural networks as a model class. A standard neural network can be seen as a probabilistic classifier h: in the case of classification, given a query \({\varvec{x}} \in {\mathcal{X}}\), the final layer of the network typically outputs a probability distribution (using transformations such as softmax) on the set of classes \({\mathcal{Y}}\), and in the case of regression, a distribution (e.g., a GaussianFootnote 8) is placed over a point prediction \(h({\varvec{x}}) \in \mathbb {R}\) (typically conceived as the expectation of the distribution). Training a neural network can essentially be seen as doing maximum likelihood inference. As such, it yields probabilistic predictions, but no information about the confidence in these probabilities. In other words, it captures aleatoric but no epistemic uncertainty.

In the context of neural networks, epistemic uncertainty is commonly understood as uncertainty about the model parameters, that is, the weights \(\varvec{w}\) of the neural network (which correspond to the parameter \(\theta\) in (18)). Obviously, this is a special case of what we called approximation uncertainty. To capture this kind of epistemic uncertainty, Bayesian neural networks (BNNs) have been proposed as a Bayesian extension of deep neural networks (Denker and LeCun 1991; Kay 1992; Neal 2012). In BNNs, each weight is represented by a probability distribution (again, typically a Gaussian) instead of a real number, and learning comes down to Bayesian inference, i.e., computing the posterior \(p( \varvec{w} \, | \,{\mathcal{D}})\). The predictive distribution of an outcome given a query instance \({\varvec{x}}_q\) is then given by

$$\begin{aligned} p(y \, | \,{\varvec{x}}_q , {\mathcal{D}} ) = \int p( y \, | \,{\varvec{x}}_q , \varvec{w}) \, p(\varvec{w} \, | \,{\mathcal{D}}) \, d \varvec{w} . \end{aligned}$$

Since the posteriors on the weights cannot be obtained analytically, approximate variational techniques are used (Jordan et al. 1999; Graves 2011), seeking a variational distribution \(q_{{\varvec{\theta}}}\) on the weights that minimizes the Kullback-Leibler divergence \({\text {KL}}(q_{{\varvec{\theta}}} \Vert p(\varvec{w} \, | \,{\mathcal{D}}))\) between \(q_{{\varvec{\theta}}}\) and the true posterior. An important example is Dropout variational inference (Gal and Ghahramani 2016), which establishes an interesting connection between (variational) Bayesian inference and the idea of using Dropout as a learning technique for neural networks. Likewise, given a query \({\varvec{x}}_{q}\), the predictive distribution for the outcome y is approximated using techniques like Monte Carlo sampling, i.e., drawing model weights from the approximate posterior distributions (again, a connection to Dropout can be established). The total uncertainty can then be quantified in terms of common measures such as entropy (in the case of classification) or variance (in the case of regression) of this distribution.
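As an illustration, the following (untrained and therefore purely illustrative) PyTorch sketch of Monte Carlo dropout keeps dropout active at prediction time, draws several stochastic forward passes, and quantifies the total uncertainty by the entropy of the averaged class distribution; the architecture and all sizes are arbitrary assumptions:

```python
import torch
import torch.nn as nn

# A small classifier with dropout; keeping dropout active at prediction time
# yields Monte Carlo samples corresponding to an approximate posterior over
# the network weights (Gal and Ghahramani 2016)
model = nn.Sequential(
    nn.Linear(2, 64), nn.ReLU(), nn.Dropout(p=0.5),
    nn.Linear(64, 3),  # three classes
)

def mc_dropout_predict(x, n_samples=100):
    model.train()  # keep dropout stochastic
    with torch.no_grad():
        return torch.stack([torch.softmax(model(x), dim=-1) for _ in range(n_samples)])

x_query = torch.tensor([[0.3, -1.2]])
probs = mc_dropout_predict(x_query)            # shape: (n_samples, batch, n_classes)

p_mean = probs.mean(dim=0)                     # approximate predictive distribution
total = -(p_mean * p_mean.log()).sum(dim=-1)   # entropy of the mean = total uncertainty
print("predictive distribution:", p_mean.numpy())
print("total uncertainty (entropy):", total.item())
```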

The total uncertainty, say the variance \({\mathbf{V}}(\hat{y})\) in regression, still comprises both the residual error, i.e., the variance \(\sigma ^2_\epsilon\) of the observation noise (aleatoric uncertainty), and the variance due to parameter uncertainty (epistemic uncertainty). The former can be heteroscedastic, which means that \(\sigma ^2_\epsilon\) is not constant but a function of \({\varvec{x}} \in {\mathcal{X}}\). Kendall and Gal (2017) propose a way to learn heteroscedastic aleatoric uncertainty via loss attenuation. Roughly speaking, the idea is to let the neural net predict not only the conditional mean of y given \({\varvec{x}}_{q}\), but also the residual variance. The corresponding loss function to be minimized is constructed in such a way (or actually derived from the probabilistic interpretation of the model) that prediction errors for points with a high residual variance are penalized less, but at the same time, a penalty is also incurred for predicting a high variance.

An explicit attempt at measuring and separating aleatoric and epistemic uncertainty (in the context of regression) is made by Depeweg et al. (2018). Their idea is as follows: They quantify the total uncertainty as well as the aleatoric uncertainty, and then obtain the epistemic uncertainty as the difference. More specifically, they propose to measure the total uncertainty in terms of the entropy of the predictive posterior distribution, which, in the case of discrete \({\mathcal{Y}}\), is given byFootnote 9

$$\begin{aligned} H \big [ \, p(y \, | \,{\varvec{x}}) \, \big ] = - \sum _{y \in {\mathcal{Y}}} p(y \, | \,{\varvec{x}}) \log _2 p(y \, | \,{\varvec{x}}) \, . \end{aligned}$$
(19)

This uncertainty also includes the (epistemic) uncertainty about the network weights \(\varvec{w}\). Fixing a set of weights, i.e., considering a distribution \(p(y \, | \,\varvec{w}, {\varvec{x}})\), thus removes the epistemic uncertainty. Therefore, the expectation over the entropies of these distributions,

$$\begin{aligned} {\mathbf{E}}_{p(\varvec{w} \, | \,{\mathcal{D}})} H \big [ p(y \, | \,\varvec{w} , {\varvec{x}}) \big ] = - \int p( \varvec{w} \, | \,{\mathcal{D}}) \left( \sum _{y \in {\mathcal{Y}}} p(y \, | \,\varvec{w}, {\varvec{x}}) \log _2 p(y \, | \,\varvec{w}, {\varvec{x}}) \right) \, d \, \varvec{w} , \end{aligned}$$
(20)

is a measure of the aleatoric uncertainty. Finally, the epistemic uncertainty is obtained as the difference

$$\begin{aligned} u_e({\varvec{x}}) {:}{=}H \big [ p(y \, | \,{\varvec{x}}) \big ] - {\mathbf{E}}_{p( \varvec{w}\, | \,{\mathcal{D}})} H \big [ p(y \, | \,\varvec{w}, {\varvec{x}}) \big ] , \end{aligned}$$
(21)

which equals the mutual information \(I(y, \varvec{w})\) between y and \(\varvec{w}\). Intuitively, epistemic uncertainty thus captures the amount of information about the model parameters \(\varvec{w}\) that would be gained through knowledge of the true outcome y. A similar approach was recently adopted by Mobiny et al. (2017), who also propose a technique to compute (21) approximately.

Bayesian model averaging establishes a natural connection between Bayesian inference and ensemble learning, and indeed, as mentioned earlier, the variance of the predictions produced by an ensemble is a good indicator of the (epistemic) uncertainty in a prediction. In fact, the variance is inversely related to the “peakedness” of a posterior distribution \(p(h \, | \,{\mathcal{D}})\) and can be seen as a measure of the discrepancy between the (most probable) candidate hypotheses in the hypothesis space. Based on this idea, Lakshminarayanan et al. (2017) propose a simple ensemble approach as an alternative to Bayesian NNs, which is easy to implement, readily parallelizable, and requires little hyperparameter tuning.

Inspired by Mobiny et al. (2017), an ensemble approach is also proposed by Shaker and Hüllermeier (2020). Since this approach does not necessarily need to be implemented with neural networks as base learners,Footnote 10 we switch notation and replace weight vectors \(\varvec{w}\) by hypotheses h. To approximate the measures (21) and (20), the posterior distribution \(p( h \, | \,{\mathcal{D}})\) is represented by a finite ensemble of hypotheses \(h_1, \ldots , h_M\). An approximation of (20) can then be obtained by

$$\begin{aligned} u_a({\varvec{x}}) {:}{=}- \frac{1}{M} \sum _{i=1}^M \sum _{y \in {\mathcal{Y}}} p(y \, | \,h_i, {\varvec{x}}) \log _2 p(y \, | \,h_i, {\varvec{x}}) \, , \end{aligned}$$
(22)

an approximation of (19) by

$$\begin{aligned} u_t({\varvec{x}}) {:}{=}- \sum _{y \in {\mathcal{Y}}} \left( \frac{1}{M} \sum _{i=1}^M p(y \, | \,h_i, {\varvec{x}}) \right) \log _2 \left( \frac{1}{M} \sum _{i=1}^M p(y \, | \,h_i, {\varvec{x}}) \right) \, , \end{aligned}$$
(23)

and finally an approximation of (21) by \(u_e({\varvec{x}}) {:}{=}u_t({\varvec{x}}) - u_a({\varvec{x}})\). The latter corresponds to the Jensen-Shannon divergence (Endres and Schindelin 2003) of the distributions \(p(y \, | \,h_i, {\varvec{x}})\), \(i \in [M]\) (Malinin and Gales 2018).
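The following sketch computes the approximations (22) and (23) and their difference for two illustrative ensembles of class distributions: one in which all members agree on a boundary case (conflict, i.e., aleatoric uncertainty) and one in which the members disagree (ignorance, i.e., epistemic uncertainty):

```python
import numpy as np

def entropy(p, axis=-1):
    return -(p * np.log2(np.clip(p, 1e-12, 1.0))).sum(axis=axis)

def decompose(P):
    """P has shape (M, K): the class distributions p(y | h_i, x) of M ensemble members."""
    total = entropy(P.mean(axis=0))          # Eq. (23): entropy of the averaged distribution
    aleatoric = entropy(P, axis=-1).mean()   # Eq. (22): average entropy of the members
    epistemic = total - aleatoric            # difference, approximating Eq. (21)
    return total, aleatoric, epistemic

# Members agree on a boundary case: high aleatoric, low epistemic uncertainty
P_conflict = np.array([[0.5, 0.5]] * 5)

# Members disagree strongly: each is rather confident, but in different directions
P_ignorance = np.array([[0.95, 0.05], [0.05, 0.95], [0.9, 0.1], [0.1, 0.9], [0.5, 0.5]])

for name, P in [("conflict", P_conflict), ("ignorance", P_ignorance)]:
    t, a, e = decompose(P)
    print(f"{name:9s}  total={t:.3f}  aleatoric={a:.3f}  epistemic={e:.3f}")
```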

4.6 Credal sets and classifiers

The theory of imprecise probabilities (Walley 1991) is largely built on the idea of specifying knowledge in terms of a set Q of probability distributions instead of a single distribution. In the literature, such sets are also referred to as credal sets, and often assumed to be convex (i.e., \(q , q' \in Q\) implies \(\alpha q + (1-\alpha ) q' \in Q\) for all \(\alpha \in [0,1]\)). The concept of credal sets and related notions provides the basis for a generalization of Bayesian inference, which is especially motivated by the criticism of non-informative priors as models of ignorance in standard Bayesian inference (cf. Sect. 3.3). The basic idea is to replace a single prior distribution on the model space, as used in Bayesian inference, by a (credal) set of candidate priors. Given a set of observed data, Bayesian inference can then be applied to each candidate prior, thereby producing a (credal) set of posteriors. Correspondingly, any value or quantity derived from a posterior (e.g., a prediction, an expected value, etc.) is replaced by a set of such values or quantities. An important example of such an approach is robust inference for categorical data with the imprecise Dirichlet model, which is an extension of inference with the Dirichlet distribution as a conjugate prior for the multinomial distribution (Bernardo 2005).

Methods of that kind have also been used in the context of machine learning, for example in the framework of the Naïve Credal Classifier (Zaffalon 2002; Corani and Zaffalon 2008) or more recently as an extension of sum-product networks (Maau et al. 2017). As for the former, we can think of a Naïve Bayes (NB) classifier as a hypothesis \(h = h_{{\varvec{\theta}}}\) specified by a parameter vector \({{\varvec{\theta}}}\) that comprises a prior probability \(\theta _k\) for each class \(y_k \in {\mathcal{Y}}\) and a conditional probability \(\theta _{i,j,k}\) for observing the \(i^{th}\) value \(a_{i,j}\) of the \(j^{th}\) attribute given class \(y_k\). For a given query instance specified by attributes \((a_{i_1,1}, \ldots , a_{i_J,J})\), the posterior class probabilities are then given by

$$\begin{aligned} p(y_k \, | \,a_{i_1,1}, \ldots , a_{i_J,J}) \propto \theta _k \prod _{j=1}^J \theta _{i_j,j,k} \, . \end{aligned}$$
(24)

In the Naïve Credal Classifier, the \(\theta _k\) and \(\theta _{i_j,j,k}\) are specified in terms of (local) credal setsFootnote 11, i.e., there is a class of probability distributions Q on \({\mathcal{Y}}\) playing the role of candidate priors, and a class of probability distributions \(Q_{j,k}\) specifying possible distributions on the attribute values of the \(j^{th}\) attribute given class \(y_k\). A single posterior class probability (24) is then replaced by the set (interval) of such probabilities that can be produced by taking \(\theta _k\) from a distribution in Q and \(\theta _{i_j,j,k}\) from a distribution in \(Q_{j,k}\). As an aside, let us note that the computations involved may of course become costly.
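A toy sketch of this construction is given below: each local credal set is represented (approximately) by a small list of candidate parameter vectors, and the interval of posterior class probabilities is obtained by brute force over all combinations of these candidates; all numbers are illustrative:

```python
import numpy as np
from itertools import product

# Tiny Naive Credal Classifier sketch with two classes and two binary attributes
priors = [np.array([0.5, 0.5]),
          np.array([0.7, 0.3])]                      # candidate class priors (theta_k)

# cond[j][k] = candidate distributions over the values of attribute j given class k
cond = [
    [[np.array([0.8, 0.2]), np.array([0.7, 0.3])],   # attribute 0 | class 0
     [np.array([0.3, 0.7]), np.array([0.2, 0.8])]],  # attribute 0 | class 1
    [[np.array([0.9, 0.1])],                         # attribute 1 | class 0
     [np.array([0.4, 0.6])]],                        # attribute 1 | class 1
]

query = (0, 1)  # observed attribute values of the query instance

def posterior(prior, theta0, theta1):
    """Normalized posterior class probabilities (24) for one parameter combination."""
    scores = np.array([prior[k] * theta0[k][query[0]] * theta1[k][query[1]]
                       for k in range(2)])
    return scores / scores.sum()

posteriors = np.array([
    posterior(prior, theta0, theta1)
    for prior in priors
    for theta0 in product(cond[0][0], cond[0][1])
    for theta1 in product(cond[1][0], cond[1][1])
])

lower, upper = posteriors.min(axis=0), posteriors.max(axis=0)
for k in range(2):
    print(f"class {k}: posterior probability in [{lower[k]:.3f}, {upper[k]:.3f}]")
```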

4.6.1 Uncertainty measures for credal sets

Interestingly, there has been quite some work on defining uncertainty measures for credal sets and related representations, such as Dempster-Shafer evidence theory (Klir 1994). Even more interestingly, a basic distinction between two types of uncertainty contained in a credal set, referred to as conflict (randomness, discord) and non-specificity, respectively, has been made by Yager (1983). The importance of this distinction was already emphasized by Kolmogorov (1965). These notions are in direct correspondence with what we call aleatoric and epistemic uncertainty.

The standard uncertainty measure in classical possibility theory (where uncertain information is simply represented in the form of subsets \(A \subseteq {\mathcal{Y}}\) of possible alternatives) is the Hartley measureFootnote 12 (Hartley 1928)

$$\begin{aligned} H(A) = \log ( |A|) \, . \end{aligned}$$
(25)

Just like the Shannon entropy (17), this measure can be justified axiomaticallyFootnote 13.

Given the insight that conflict and non-specificity are two different, complementary sources of uncertainty, and standard (Shannon) entropy and (25) as well-established measures of these two types of uncertainty, a natural question in the context of credal sets is to ask for a generalized representation

$$\begin{aligned} {\text {U}}(Q) = {\text {AU}}(Q) + {\text {EU}}(Q) \, , \end{aligned}$$
(26)

where \({\text {U}}\) is a measure of total (aggregate) uncertainty, \({\text {AU}}\) a measure of aleatoric uncertainty (conflict, a generalization of the Shannon entropy), and \({\text {EU}}\) a measure of epistemic uncertainty (non-specificity, a generalization of the Hartley measure).

As for the non-specificity part in (26), the following generalization of the Hartley measure to the case of graded possibilities has been proposed by various authors (Abellan and Moral 2000):

$$\begin{aligned} {\text {GH}}(Q) {:}{=}\sum _{A \subseteq {\mathcal{Y}}} {\text {m}}_Q(A) \, \log (|A|) \, , \end{aligned}$$
(27)

where \({\text {m}}_Q: \, 2^{{\mathcal{Y}}} \longrightarrow [0,1]\) is the Möbius inverse of the capacity function \(\nu _Q:\, 2^{{\mathcal{Y}}} \longrightarrow [0,1]\) defined by

$$\begin{aligned} \nu _Q(A) {:}{=}\inf _{q \in Q} q(A) \end{aligned}$$
(28)

for all \(A \subseteq {\mathcal{Y}}\), that is,

$$\begin{aligned} {\text {m}}_Q(A) = \sum _{B \subseteq A} (-1)^{|A \setminus B|} \nu _Q(B) \, . \end{aligned}$$

The measure (27) enjoys several desirable axiomatic properties, and its uniqueness was shown by Klir and Mariano (1987).

The generalization of the Shannon entropy H as a measure of conflict turned out to be more difficult. The upper and lower Shannon entropy play an important role in this regard:

$$\begin{aligned} H^*(Q) {:}{=}\max _{q \in Q} H(q) \, , \quad H_*(Q) {:}{=}\min _{q \in Q} H(q) \end{aligned}$$
(29)

Based on these measures, the following disaggregations of total uncertainty (26) have been proposed (Abellan et al. 2006):

$$\begin{aligned} H^*(Q)&= \big (H^*(Q) - {\text {GH}}(Q) \big ) + {\text {GH}}(Q) \end{aligned}$$
(30)
$$\begin{aligned} H^*(Q)&= H_*(Q) + \big (H^*(Q) - H_*(Q) \big ) \end{aligned}$$
(31)

In both cases, upper entropy serves as a measure of total uncertainty U(Q), which is again justified on an axiomatic basis. In the first case, the generalized Hartley measure is used for quantifying epistemic uncertainty, and aleatoric uncertainty is obtained as the difference between total and epistemic uncertainty. In the second case, epistemic uncertainty is specified in terms of the difference between upper and lower entropy.
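The following sketch computes these quantities for a credal set that is represented, for the purpose of illustration, by a small finite list of distributions on a three-element space; note that taking extrema over such a finite list is only a rough surrogate for the extrema over the full (convex) credal set, and logarithms are taken to base 2 throughout for consistency with the Shannon entropy:

```python
import numpy as np
from itertools import chain, combinations

Y = [0, 1, 2]

def subsets(S):
    return chain.from_iterable(combinations(S, r) for r in range(len(S) + 1))

def entropy(q):
    q = np.asarray(q, dtype=float)
    return -(q * np.log2(np.clip(q, 1e-12, 1.0))).sum()

# Finite stand-in for a credal set Q: a few candidate distributions on Y (illustrative)
Q = [np.array([0.6, 0.3, 0.1]),
     np.array([0.4, 0.4, 0.2]),
     np.array([0.5, 0.2, 0.3])]

# Lower probability (capacity) nu_Q(A), Eq. (28)
nu = {A: min(q[list(A)].sum() for q in Q) for A in subsets(Y)}

# Moebius inverse m_Q(A) and generalized Hartley measure GH(Q), Eq. (27)
def moebius(A):
    return sum((-1) ** (len(A) - len(B)) * nu[B] for B in subsets(A))

GH = sum(moebius(A) * np.log2(len(A)) for A in subsets(Y) if len(A) >= 1)

# Upper and lower entropy, Eq. (29)
H_upper = max(entropy(q) for q in Q)
H_lower = min(entropy(q) for q in Q)

print(f"GH (non-specificity): {GH:.3f}")
print(f"decomposition (30): AU={H_upper - GH:.3f}  EU={GH:.3f}  total={H_upper:.3f}")
print(f"decomposition (31): AU={H_lower:.3f}  EU={H_upper - H_lower:.3f}  total={H_upper:.3f}")
```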

4.6.2 Set-valued prediction

Credal classifiers are used for “indeterminate” prediction, that is, to produce reliable set-valued predictions that are likely to cover the true outcome. In this regard, the notion of dominance plays an important role: An outcome y dominates another outcome \(y'\) if y is more probable than \(y'\) for each distribution in the credal set, that is,

$$\begin{aligned} \gamma (y,y', {\varvec{x}}_q) {:}{=}\inf _{h \in C} \frac{p( y \, | \,{\varvec{x}}_q, h)}{p(y' \, | \,{\varvec{x}}_q, h)} > 1 \, , \end{aligned}$$
(32)

where C is a credal set of hypotheses, and \(p( y \, | \,{\varvec{x}}, h)\) the probability of the outcome y (for the query \({\varvec{x}}_q\)) under hypothesis h. The set-valued prediction for a query \({\varvec{x}}_q\) is then given by the set of all non-dominated outcomes \(y \in {\mathcal{Y}}\), i.e., those outcomes that are not dominated by any other outcome. Practically, the computations are based on the interval-valued probabilities \([\underline{p}(y \, | \,{\varvec{x}}_q), \overline{p}(y \, | \,{\varvec{x}}_q)]\) assigned to each class \(y \in {\mathcal{Y}}\), where

$$\begin{aligned} \underline{p}(y \, | \,{\varvec{x}}_q) {:}{=}\inf _{h \in C} p(y \, | \,{\varvec{x}}_q, h)\, , \quad \overline{p}(y \, | \,{\varvec{x}}_q) {:}{=}\sup _{h \in C} p(y \, | \,{\varvec{x}}_q, h) \, . \end{aligned}$$
(33)

Note that the upper probabilities \(\overline{p}(y \, | \,{\varvec{x}}_q)\) are very much in the spirit of the plausibility degrees (36).

Interestingly, uncertainty degrees can be derived on the basis of dominance degrees (32). For example, the following uncertainty score has been proposed by Antonucci et al. (2012) in the context of uncertainty sampling for active learning:

$$\begin{aligned} s({\varvec{x}}_q) {:}{=}- \max \big (\gamma (+1, -1, {\varvec{x}}_q ), \gamma (-1, +1, {\varvec{x}}_q ) \big ) \, . \end{aligned}$$
(34)

This degree is low if one of the outcomes strongly dominates the other one, but high if this is not the case. It is closely related to the degree of epistemic uncertainty in (39) to be discussed in Sect. 4.7, but also seems to capture some aleatoric uncertainty. In fact, \(s({\varvec{x}}_q)\) increases both with the width of the interval \([\underline{p}(y \, | \,{\varvec{x}}_q) , \overline{p}(y \, | \,{\varvec{x}}_q)]\) and with the closeness of its midpoint to \(1/2\).
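A small sketch, again representing the credal set by a finite list of predicted class distributions for a fixed query, computes the dominance degrees (32), the resulting set of non-dominated classes, the probability intervals (33), and the uncertainty score (34); the numbers are illustrative:

```python
import numpy as np

# Predictions p(y | x_q, h) of a few hypotheses from a (finite stand-in for a) credal set,
# for the two classes {+1, -1} at a fixed query x_q
C = np.array([[0.7, 0.3],
              [0.8, 0.2],
              [0.6, 0.4]])   # columns: p(+1 | x_q, h), p(-1 | x_q, h)

def gamma(i, j):
    """Dominance degree (32): infimum over hypotheses of the ratio p(y_i | h) / p(y_j | h)."""
    return np.min(C[:, i] / C[:, j])

# y_i dominates y_j if gamma(i, j) > 1; the prediction is the set of non-dominated classes
labels = ["+1", "-1"]
prediction = {labels[j] for j in range(2)
              if not any(gamma(i, j) > 1 for i in range(2) if i != j)}

# Interval-valued probabilities (33) and the uncertainty score (34)
lower, upper = C.min(axis=0), C.max(axis=0)
score = -max(gamma(0, 1), gamma(1, 0))

print("non-dominated prediction set:", prediction)
print("probability intervals:", dict(zip(labels, zip(lower, upper))))
print("uncertainty score (34):", round(score, 3))
```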

Lassiter (2020) discusses the modeling of “credal imprecision” by means of sets of measures in a critical way and contrasts this approach with hierarchical probabilistic (Bayesian) models, i.e., distributions of measures (see Fig. 10, right side). Among several arguments put forward against sets-of-measures models, one aspect is specifically interesting from a machine learning perspective, namely the problem that modeling a lack of knowledge in a set-based manner may hamper the possibility of inductive inference, up to a point where learning from empirical data is not possible any more. This is closely connected to the insight that learning from data necessarily requires an inductive bias, and learning without assumptions is not possible (Wolpert 1996). To illustrate the point, consider the case of complete ignorance as an example, which could be modeled by the credal set \(\mathbb {P}_{all}\) that contains all probability measures (on a hypothesis space) as candidate priors. Updating this set by pointwise conditioning on observed data will essentially reproduce the same set as a credal set of posteriors. Roughly speaking, this is because \(\mathbb {P}_{all}\) will contain very extreme priors, which will remain extreme even after updating with a sample of finite size. In other words, no finite sample is enough to annihilate a sufficiently extreme prior belief. Replacing \(\mathbb {P}_{all}\) by a proper subset of measures may avoid this problem, but any such choice (implying a sharp transition between possible and excluded priors) will appear somewhat arbitrary. According to Lassiter (2020), an appealing alternative is using a probability distribution instead of a set, like in hierarchical Bayesian modeling.

4.7 Reliable classification

To the best of our knowledge, Senge et al. (2014) were the first to explicitly motivate the distinction between aleatoric and epistemic uncertainty in a machine learning context. Their approach leverages the concept of normalized likelihood (cf. Appendix A.4). Moreover, it combines set-based and distributional (probabilistic) inference, and can thus be positioned in between version space learning and Bayesian inference as discussed in Sect. 3. Since the approach contains concepts that might be less familiar to a machine learning audience, our description is a bit more detailed than for the other methods discussed in this section.

Consider the simplest case of binary classification with classes \({\mathcal{Y}} {:}{=}\{-1, +1\}\), which suffices to explain the basic idea (and which has been generalized to the multi-class case by Nguyen et al. (2018)). Senge et al. (2014) focus on predictive uncertainty and derive degrees of uncertainty in a prediction in two steps, which we are going to discuss in turn:

  • First, given a query instance \({\varvec{x}}_q\), a degree of “plausibility” is derived for each candidate outcome \(y \in {\mathcal{Y}}\). These are degrees in the unit interval, but not probabilities. As they are not constrained by a total mass of 1, they are better suited to capturing a lack of knowledge, and hence epistemic uncertainty (cf. Sect. 3.3).

  • Second, degrees of aleatoric and epistemic uncertainty are derived from the degrees of plausibility.

4.7.1 Modeling the plausibility of predictions

Recall that, in version space learning, the plausibility of both hypotheses and outcomes is expressed in a purely bivalent way: according to (10), a hypothesis is either considered possible/plausible or not (\(\llbracket h \in {\mathcal{V}} \rrbracket\)), and an outcome y is either supported or not (\(\llbracket h({\varvec{x}}) = y \rrbracket\)). Senge et al. (2014) generalize both parts of the prediction from bivalent to graded plausibility and support: A hypothesis \(h \in {\mathcal{H}}\) has a degree of plausibility \(\pi _{{\mathcal{H}}}(h) \in [0,1]\), and the support of outcomes y is expressed in terms of probabilities \(h({\varvec{x}}) = p(y \, | \,{\varvec{x}}) \in [0,1]\).

More specifically, referring to the notion of normalized likelihood, a plausibility distribution on \({\mathcal{H}}\) (i.e., a plausibility for each hypothesis \(h \in {\mathcal{H}}\)) is defined as follows:

$$\begin{aligned} \pi _{{\mathcal{H}}}(h) {:}{=}\frac{L(h)}{\sup _{h' \in {\mathcal{H}}} L(h')} = \frac{L(h)}{L(h^{ml})} , \end{aligned}$$
(35)

where L(h) is the likelihood of the hypothesis h on the data \({\mathcal{D}}\) (i.e., the probability of \({\mathcal{D}}\) under this hypothesis), and \(h^{ml} \in {\mathcal{H}}\) is the maximum likelihood (ML) estimate. Thus, the plausibility of a hypothesis is in proportion to its likelihood,Footnote 14 with the ML estimate having the highest plausibility of 1.

The second step, both in version space and Bayesian learning, consists of translating uncertainty on \({\mathcal{H}}\) into uncertainty about the prediction for a query \({\varvec{x}}_q\). To this end, all predictions \(h({\varvec{x}}_{q})\) need to be aggregated, taking into account the plausibility of the hypotheses \(h \in {\mathcal{H}}\). Due to the problems of the averaging approach (14) in Bayesian inference, a generalization of the “existential” aggregation (10) used in version space learning is adopted:

$$\begin{aligned} \pi (+1 \, | \,{\varvec{x}}_{q}) {:}{=}\sup _{h \in {\mathcal{H}}} \min \big ( \pi _{{\mathcal{H}}}(h) , \pi (+1 \, | \,h, {\varvec{x}}_{q}) \big ) , \end{aligned}$$
(36)

where \(\pi (+1 \, | \,h, {\varvec{x}}_{q})\) is the degree of support of the positive class provided by h.Footnote 15 This measure of support, which generalizes the all-or-nothing support \(\llbracket h({\varvec{x}}) = y \rrbracket\) in (10), is defined as follows:

$$\begin{aligned} \pi (+1 \, | \,h, {\varvec{x}}_{q}) {:}{=}\max \big (2 h({\varvec{x}}_{q})-1, 0 \big ) \end{aligned}$$
(37)

Thus, the support is 0 as long as the probability predicted by h is \(\le 1/2\), and linearly increases afterward, reaching 1 for \(h({\varvec{x}}_{q})=1\). Recalling that h is a probabilistic classifier, this clearly makes sense, since values \(h({\varvec{x}}_{q}) \le 1/2\) are actually more in favor of the negative class, and therefore no evidence for the positive class. Also, as will be seen further below, this definition assures a maximal degree of aleatoric uncertainty in the case of full certainty about the uniform distribution \(h_{1/2}\), which is a desirable property. Since the supremum operator in (36) can be seen as a generalized existential quantifier, the expression (36) can be read as follows: The class \(+1\) is plausible insofar as there exists a hypothesis h that is plausible and that strongly supports \(+1\). Analogously, the plausibility for \(-1\) is defined as follows:

$$\begin{aligned} \pi (-1 \, | \,{\varvec{x}}_{q}) {:}{=}\sup _{h \in {\mathcal{H}}} \min \big ( \pi _{{\mathcal{H}}}(h) , \pi (-1 \, | \,h, {\varvec{x}}_{q}) \big ) , \end{aligned}$$
(38)

with \(\pi (-1 \, | \,h, {\varvec{x}}_{q}) = \max (1 - 2 h({\varvec{x}}_{q}), 0)\). An illustration of this kind of “max-min inference” and a discussion of how it relates to Bayesian inference (based on “sum-product aggregation”) can be found in Appendix B in the supplementary material.

4.7.2 From plausibility to aleatoric and epistemic uncertainty

Given the plausibilities \(\pi (+1) = \pi (+1 \, | \,{\varvec{x}}_{q})\) and \(\pi (-1) = \pi (-1\, | \,{\varvec{x}}_{q})\) of the positive and negative class, respectively, and having to make a prediction, one would naturally decide in favor of the more plausible class. Perhaps more interestingly, meaningful degrees of epistemic uncertainty \(u_e\) and aleatoric uncertainty \(u_a\) can be defined on the basis of the two degrees of plausibility:

$$\begin{aligned} u_e&{:}{=}\min \big ( \pi (+1 ) , \pi (-1) \big ) \nonumber \\ u_a&{:}{=}1 - \max \big ( \pi (+1) , \pi (-1) \big ) \end{aligned}$$
(39)

Thus, \(u_e\) is the degree to which both \(+1\) and \(-1\) are plausibleFootnote 16, and \(u_a\) the degree to which neither \(+1\) nor \(-1\) are plausible. Since these two degrees of uncertainty satisfy \(u_a + u_e \le 1\), the total uncertainty (aleatoric \(+\) epistemic) is upper-bounded by 1. Strictly speaking, since the degrees \(\pi (y)\) should be interpreted as upper bounds on plausibility, which decrease in the course of time (with increasing sample size), the uncertainty degrees (39) should be interpreted as bounds as well. For example, in the very beginning, when no or very little data has been observed, both outcomes are fully plausible (\(\pi (+1) = \pi (-1) = 1\)), hence \(u_e = 1\) and \(u_a = 0\). This does not mean, of course, that there is no aleatoric uncertainty involved. Instead, it is simply not yet known or, say, confirmed. Thus, \(u_a\) should be seen as a lower bound on the “true” aleatoric uncertainty. For instance, when y is the outcome of a fair coin toss, the aleatoric uncertainty will increase over time and reach 1 in the limit, while \(u_e\) will decrease and vanish at some point: Eventually, the learner will be fully aware of facing full aleatoric uncertainty.
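For the simple case in which each hypothesis directly corresponds to a Bernoulli parameter \(\theta = h({\varvec{x}}_q)\) and the relevant data consist of counts of positive and negative examples, the quantities (35)-(39) can be computed on a grid, as in the following illustrative sketch (the suprema in (36) and (38) are approximated by maxima over the grid):

```python
import numpy as np

def plausibility_and_uncertainty(n_pos, n_neg, grid_size=1001):
    """Plausibility of the two classes and the degrees (39) of epistemic and aleatoric
    uncertainty for a Bernoulli-type hypothesis space h(x_q) = theta, following (35)-(38)."""
    theta = np.linspace(0.0, 1.0, grid_size)

    # Normalized likelihood (35) based on n_pos positive and n_neg negative observations
    log_lik = (n_pos * np.log(np.clip(theta, 1e-12, 1.0))
               + n_neg * np.log(np.clip(1.0 - theta, 1e-12, 1.0)))
    pi_H = np.exp(log_lik - log_lik.max())

    # Degrees of support (37) and plausibility of the two classes (36), (38)
    support_pos = np.maximum(2 * theta - 1, 0)
    support_neg = np.maximum(1 - 2 * theta, 0)
    pi_pos = np.max(np.minimum(pi_H, support_pos))
    pi_neg = np.max(np.minimum(pi_H, support_neg))

    # Epistemic and aleatoric uncertainty (39)
    return pi_pos, pi_neg, min(pi_pos, pi_neg), 1 - max(pi_pos, pi_neg)

for n_pos, n_neg in [(1, 1), (4, 1), (40, 10), (50, 50)]:
    pi_p, pi_n, u_e, u_a = plausibility_and_uncertainty(n_pos, n_neg)
    print(f"pos={n_pos:2d} neg={n_neg:2d}  pi(+1)={pi_p:.2f} pi(-1)={pi_n:.2f}  "
          f"epistemic={u_e:.2f} aleatoric={u_a:.2f}")
```

With few observations, both classes remain plausible and epistemic uncertainty dominates; as the sample grows, the epistemic part shrinks and the remaining uncertainty is increasingly aleatoric, in line with the discussion above.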

More generally, the following special cases might be of interest:

  • Full epistemic uncertainty: \(u_e = 1\) requires the existence of at least two fully plausible hypotheses (i.e., both with the highest likelihood), the one fully supporting the positive and the other fully supporting the negative class. This situation is likely to occur (at least approximately) in the case of a small sample size, for which the likelihood is not very peaked.

  • No epistemic uncertainty: \(u_e = 0\) requires either \(\pi (+1) = 0\) or \(\pi (-1)=0\), which in turn means that \(h({\varvec{x}}_{q}) < 1/2\) for all hypotheses with non-zero plausibility, or \(h({\varvec{x}}_{q}) > 1/2\) for all these hypotheses. In other words, there is no disagreement about which of the two classes should be favored. Specifically, suppose that all plausible hypotheses agree on the same conditional probability distribution \(p(+1 \, | \,{\varvec{x}}) = \alpha\) and \(p(-1 \, | \,{\varvec{x}}) = 1-\alpha\), and let \(\beta = \max (\alpha , 1- \alpha )\). In this case, \(u_e = 0\), and the degree of aleatoric uncertainty \(u_a = 2(1- \beta )\) depends on how close \(\beta\) is to 1.

  • Full aleatoric uncertainty: This is a special case of the previous one, in which \(\beta = 1/2\). Indeed, \(u_a = 1\) means that all plausible hypotheses assign a probability of 1/2 to both classes. In other words, there is an agreement that the query instance is a boundary case.

  • No uncertainty: Again, this is a special case of the second one, with \(\beta = 1\). A clear preference (close to 1) in favor of one of the two classes means that all plausible hypotheses, i.e., all hypotheses with a high likelihood, provide full support to that class.

Although algorithmic aspects are not the focus of this paper, it is worth mentioning that the computation of (36), and likewise of (38), may become rather complex. In fact, the computation of the supremum comes down to solving an optimization problem, the complexity of which strongly depends on the hypothesis space \({\mathcal{H}}\).

4.8 Conformal prediction

Conformal prediction (Vovk et al. 2003; Shafer and Vovk 2008; Balasubramanian et al. 2014) is a framework for reliable prediction that is rooted in classical frequentist statistics, more specifically in hypothesis testing. Given a sequence of training observations and a new query \({\varvec{x}}_{N+1}\) (which we denoted by \({\varvec{x}}_{q}\) before) with unknown outcome \(y_{N+1}\),

$$\begin{aligned} ({\varvec{x}}_1, y_1), \, ({\varvec{x}}_2, y_2), \ldots , ({\varvec{x}}_N, y_N), \, ({\varvec{x}}_{N+1}, \bullet ) , \end{aligned}$$
(40)

the basic idea is to hypothetically replace \(\bullet\) by each candidate, i.e., to test the hypothesis \(y_{N+1} = y\) for all \(y \in {\mathcal{Y}}\). Only those outcomes y for which this hypothesis can be rejected at a predefined level of confidence are excluded, while those for which the hypothesis cannot be rejected are collected to form the prediction set or prediction region \(Y^\epsilon \subseteq {\mathcal{Y}}\). The construction of a set-valued prediction \(Y^\epsilon = Y^\epsilon ({\varvec{x}}_{N+1})\) that is guaranteed to cover the true outcome \(y_{N+1}\) with a given probability \(1- \epsilon\) (for example 95 %), instead of producing a point prediction \(\hat{y}_{N+1} \in {\mathcal{Y}}\), is the basic idea of conformal prediction. Here, \(\epsilon \in (0,1)\) is a pre-specified level of significance. In the case of classification, \(Y^\epsilon\) is a subset of the set of classes \({\mathcal{Y}} = \{ y_1, \ldots , y_K \}\), whereas in regression, a prediction region is commonly represented in terms of an interval.Footnote 17

Hypothesis testing is done in a nonparametric way: Consider any “nonconformity” function \(f: \, {\mathcal{X}} \times {\mathcal{Y}} \longrightarrow \mathbb {R}\) that assigns scores \(\alpha = f({\varvec{x}}, y)\) to input/output tuples; these scores can be interpreted as a measure of “strangeness” of the pattern \(({\varvec{x}}, y)\), i.e., the higher the score, the less the data point \(({\varvec{x}}, y)\) conforms to what one would expect to observe. Applying this function to the sequence (40), with a specific (though hypothetical) choice of \(y = y_{N+1}\), yields a sequence of scores

$$\begin{aligned} \alpha _1, \, \alpha _2, \ldots , \alpha _N , \, \alpha _{N+1} . \end{aligned}$$

Denote by \(\sigma\) the permutation of \(\{1, \ldots , N+1\}\) that sorts the scores in increasing order, i.e., such that \(\alpha _{\sigma (1)} \le \ldots \le \alpha _{\sigma (N+1)}\). Under the assumption that the hypothetical choice of \(y_{N+1}\) is in agreement with the true data-generating process, and that this process has the property of exchangeability (which is weaker than the assumption of independence and essentially means that the order of observations is irrelevant), every permutation \(\sigma\) has the same probability of occurrence. Consequently, the probability that \(\alpha _{N+1}\) is among the fraction \(\epsilon\) of highest nonconformity scores should be low. This notion can be captured by the p-value associated with the candidate y, defined as

$$\begin{aligned} p(y) {:}{=}\frac{\# \{ i \, | \,\alpha _i \ge \alpha _{N+1} \}}{N+1} \end{aligned}$$

According to what we said, the probability that \(p(y) < \epsilon\) (i.e., that \(\alpha _{N+1}\) is among the fraction \(\epsilon\) of highest \(\alpha\)-values) is upper-bounded by \(\epsilon\). Thus, the hypothesis \(y_{N+1} = y\) can be rejected for those candidates y for which \(p(y) < \epsilon\).
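The following sketch implements this procedure for a toy binary problem, using the distance to the class mean as a deliberately simple (assumed) nonconformity function; depending on the query, the resulting prediction region may be a singleton, the full label set, or even empty:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy 1D training data for two classes (illustrative)
X = np.concatenate([rng.normal(-2, 1, 30), rng.normal(2, 1, 30)])
y = np.array([0] * 30 + [1] * 30)

def nonconformity(x, label, X_all, y_all):
    """A simple nonconformity function: distance to the mean of the label's class."""
    return abs(x - X_all[y_all == label].mean())

def prediction_region(x_query, eps=0.05):
    region = []
    for candidate in (0, 1):
        # Augment the data with the query and the hypothetical label y_{N+1} = candidate
        X_aug = np.append(X, x_query)
        y_aug = np.append(y, candidate)
        alphas = np.array([nonconformity(xi, yi, X_aug, y_aug)
                           for xi, yi in zip(X_aug, y_aug)])
        p_value = np.mean(alphas >= alphas[-1])   # fraction of scores at least as "strange"
        if p_value >= eps:                        # hypothesis cannot be rejected
            region.append(candidate)
    return region

for xq in (-2.0, 0.0, 6.0):
    print(f"x_q={xq:+.1f}  prediction region:", prediction_region(xq))
```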

Conformal prediction as outlined above realizes transductive inference, although inductive variants also exist (Papadopoulos 2008). The error bounds are valid and well calibrated by construction, regardless of the nonconformity function f. However, the choice of this function has an important influence on the efficiency of conformal prediction, that is, the size of prediction regions: The more suitably the nonconformity function is chosen, the smaller these sets will be.

Although conformal prediction is mainly concerned with constructing prediction regions, the scores produced in the course of this construction can also be used for quantifying uncertainty. In this regard, the notions of confidence and credibility have been introduced (Gammerman and Vovk 2002): Let \(p_1, \ldots , p_K\) denote the p-values that correspond, respectively, to the candidate outcomes \(y_1, \ldots , y_K\) in a classification problem. If a definite choice (point prediction) \(\hat{y}\) has to be made, it is natural to pick the \(y_i\) with the highest p-value. The value \(p_i\) itself is then a natural measure of credibility, since the larger (closer to 1) this value, the more likely the prediction is correct. Note that this value also corresponds to the largest significance level \(\epsilon\) for which the prediction region \(Y^\epsilon\) is still nonempty (for any larger \(\epsilon\), all candidates would be rejected). In other words, it is a degree to which \(y_i\) is indeed a plausible candidate that cannot be excluded. Another question one may ask is to what extent \(y_i\) is the unique candidate of that kind, i.e., to what extent other candidates can indeed be excluded. This can be quantified in terms of the greatest \(1 - \epsilon\) for which \(Y^\epsilon\) is the singleton set \(\{ y_i \}\), that is, 1 minus the second-highest p-value. Besides, other methods for quantifying the uncertainty of a point prediction in the context of conformal prediction have been proposed (Linusson et al. 2016).

With its main concern of constructing valid prediction regions, conformal prediction differs from most other machine learning methods, which produce point predictions \(y \in {\mathcal{Y}}\), whether equipped with a degree of uncertainty or not. In a sense, conformal prediction can even be seen as being orthogonal: It predefines the degree of uncertainty (level of confidence) and adjusts its prediction correspondingly, rather than the other way around.

4.9 Set-valued prediction based on utility maximization

In line with classical statistics, but unlike decision theory and machine learning, the setting of conformal prediction does not involve any notion of loss function. In this regard, it differs from methods for set-valued prediction based on utility maximization (or loss minimization), which are primarily used for multi-class classification problems. Similar to conformal prediction, such methods also return a set of classes when the classifier is too uncertain with respect to the class label to predict, but the interpretation of this set is different. Instead of returning a set that contains the true class label with high probability, sets that maximize a set-based utility score are sought. Besides, being based on the conditional distribution \(p(y\,|\,{\varvec{x}}_q)\) of the outcome y given a query \({\varvec{x}}_q\), most of these methods capture conditional uncertainty.

Let \(u(y,\widehat{Y})\) be a set-based utility score, where y denotes the ground truth outcome and \(\widehat{Y}\) the predicted set. Then, adopting a decision-theoretic perspective, the Bayes-optimal solution \(\widehat{Y}^{*}_u\) is found by maximizing the following objective:

$$\begin{aligned} \widehat{Y}^{*}_u({\varvec{x}}_q) = {\mathop {\hbox {argmax}}\limits _{\widehat{Y}\in 2^{{\mathcal{Y}}}\setminus \{\emptyset \}}} {\mathbf{E}}_{p(y\,|\,{\varvec{x}}_q)} \big ( u(y,\widehat{Y}) \big ) = {\mathop {\hbox {argmax}}\limits _{\widehat{Y}\in 2^{{\mathcal{Y}}}\setminus \{\emptyset \}}} \sum _{y \in {\mathcal{Y}}} u(y,\widehat{Y}) \, p(y\,|\,{\varvec{x}}_q) \,. \end{aligned}$$
(41)

Solving (41) as a brute-force search requires checking all subsets of \({\mathcal{Y}}\), resulting in an exponential time complexity. However, for many utility scores, the Bayes-optimal prediction can be found more efficiently. Various methods in this direction have been proposed under different names and qualifications of predictions, such as “indeterminate” (Zaffalon 2002), “credal” (Corani and Zaffalon 2008), “non-deterministic” (Del Coz et al. 2009), and “cautious” (Yang et al. 2017b). Although the methods typically differ in the exact shape of the utility function \(u: {\mathcal{Y}} \times 2^{{\mathcal{Y}}}\setminus \{\emptyset \} \longrightarrow [0,1]\), most functions are specific members of the following family:

$$\begin{aligned} u(y,\widehat{Y}) = \left\{ \begin{array}{cl} 0 &\quad \text{ if } y \notin \widehat{Y} \\ g(|\widehat{Y}|) &\quad \text{ if } y \in \widehat{Y} \end{array} \right. \, , \end{aligned}$$
(42)

where \(|\widehat{Y}|\) denotes the cardinality of the predicted set \(\widehat{Y}\). This family is characterized by a sequence \((g(1), \ldots ,g(K)) \in [0,1]^K\) with K the number of classes. Ideally, g should obey the following properties:

  (i) \(g(1) = 1\), i.e., the utility \(u(y,\widehat{Y})\) should be maximal when the classifier returns the true class label as a singleton set.

  (ii) g(s) should be non-increasing, i.e., the utility \(u(y,\widehat{Y})\) should be higher if the true class is contained in a smaller set of predicted classes.

  (iii) \(g(s) \ge 1/s\), i.e., the utility \(u(y,\widehat{Y})\) of predicting a set containing the true and \(s-1\) additional classes should not be lower than the expected utility of randomly guessing one of these s classes. This requirement formalizes the idea of risk-aversion: in the face of uncertainty, abstaining should be preferred to random guessing (Zaffalon et al. 2012).

Many existing set-based utility scores are recovered as special cases of (42), including the three classical measures from information retrieval discussed by Del Coz et al. (2009): precision with \(g_P(s) = 1/s\), recall with \(g_R(s)=1\), and the F\(_{1}\)-measure with \(g_{F1}(s) = 2/(1+s)\). Other utility functions with specific choices for g are studied in the literature on credal classification (Corani and Zaffalon 2008, 2009; Zaffalon et al. 2012; Yang et al. 2017b; Nguyen et al. 2018), such as

$$\begin{aligned} g_{\delta ,\gamma }(s) {:}{=}\frac{\delta }{s} - \frac{\gamma }{s^2} \,, \quad g_{\exp }(s) {:}{=}1- \exp {\left( -\frac{\delta }{s}\right) }, \quad g_{log}(s) {:}{=}\log \left( 1 + \frac{1}{s} \right) \,. \end{aligned}$$

The measure \(g_{\delta ,\gamma }(s)\) in particular is commonly used in this community, where \(\delta\) and \(\gamma\) can only take certain values in order to guarantee that the utility lies in the interval [0, 1]. Precision (here called discounted accuracy) corresponds to the case \((\delta ,\gamma )=(1,0)\). Typical choices for \((\delta ,\gamma )\), however, are (1.6, 0.6) and (2.2, 1.2) (Nguyen et al. 2018), implementing the idea of risk aversion. The measure \(g_{\exp }(s)\) is an exponentiated version of precision, where the parameter \(\delta\) likewise controls the degree of risk aversion.
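As noted above, the Bayes-optimal prediction under the family (42) can be found without enumerating all \(2^K - 1\) candidate sets: for a fixed cardinality s, the expected utility equals g(s) times the probability mass covered by the set, which is maximized by the s most probable classes, so only K candidates need to be compared. The following is a minimal sketch of this computation; the function name, the specific choice \(g_{\delta ,\gamma }\) with \((\delta ,\gamma )=(1.6, 0.6)\), and the probability vector are purely illustrative.

```python
import numpy as np

def bayes_optimal_set(probs, g):
    """Bayes-optimal set-valued prediction for utilities of the form (42).

    Since u(y, Y_hat) equals g(|Y_hat|) if y is in Y_hat and 0 otherwise,
    the expected utility of a set of size s is g(s) times the probability
    mass it covers, which is maximized by the s most probable classes.
    Hence only K candidate sets have to be compared instead of 2^K - 1.
    """
    order = np.argsort(probs)[::-1]           # classes sorted by decreasing probability
    covered = np.cumsum(probs[order])         # mass covered by the top-s classes
    sizes = np.arange(1, len(probs) + 1)
    expected_utility = g(sizes) * covered     # expected utility for each candidate size s
    s_best = int(np.argmax(expected_utility)) + 1
    return set(order[:s_best].tolist())

# Purely illustrative choice: g_{delta,gamma}(s) = delta/s - gamma/s^2 with (1.6, 0.6)
g = lambda s: 1.6 / s - 0.6 / s ** 2

probs = np.array([0.45, 0.40, 0.10, 0.05])    # hypothetical p(y | x_q)
print(bayes_optimal_set(probs, g))            # {0, 1}: abstains between the two plausible classes
```

In this toy example, the two nearly equiprobable classes are returned as a set, i.e., the classifier partially abstains instead of guessing between them.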

Another example appears in the literature on binary or multi-class classification with a reject option (Herbei and Wegkamp 2006; Linusson et al. 2018; Ramaswamy et al. 2015). Here, the prediction can only be a singleton or the full set \({\mathcal{Y}}\) containing all K classes. The first case typically receives a reward of one (if the predicted class is correct), while the second case should receive a lower reward, e.g., \(1-\alpha\). The latter corresponds to abstaining, i.e., not predicting any class label, and the (user-defined) parameter \(\alpha\) specifies a penalty for doing so, with the requirement \(0< \alpha < 1-1/K\) to be risk-averse.
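Under this utility, expected-utility maximization reduces to a simple threshold rule: predicting the single class y yields expected utility \(p(y\,|\,{\varvec{x}}_q)\), whereas abstaining yields \(1-\alpha\) no matter which class is true, so the most probable class is predicted whenever its probability reaches \(1-\alpha\), and the learner abstains otherwise. A minimal sketch of this rule, with purely illustrative names and numbers:

```python
import numpy as np

def predict_with_reject(probs, alpha):
    """Expected-utility maximization for classification with a reject option.

    Predicting the single class y gives expected utility p(y | x), while
    abstaining (predicting the full label set) gives 1 - alpha regardless of
    the true class.  Hence the most probable class is predicted whenever its
    probability reaches 1 - alpha; otherwise the classifier abstains.
    """
    y_hat = int(np.argmax(probs))
    return y_hat if probs[y_hat] >= 1.0 - alpha else None   # None encodes abstention

probs = np.array([0.55, 0.35, 0.10])                # hypothetical p(y | x_q) with K = 3
print(predict_with_reject(probs, alpha=0.3))        # 0.55 < 0.7, so the classifier abstains (None)
print(predict_with_reject(probs, alpha=0.5))        # 0.55 >= 0.5, so class 0 is predicted
```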

Set-valued predictions are also considered in hierarchical multi-class classification, mostly in the form of internal nodes of the hierarchy (Freitas 2007; Rangwala and Naik 2017; Yang et al. 2017a). Compared to the “flat” multi-class case, the prediction space is thus restricted, because only sets of classes that correspond to nodes of the hierarchy can be returned as a prediction (see the sketch below). Some of the above utility scores also appear here. For example, Yang et al. (2017a) evaluate various members of the \(g_{\delta ,\gamma }\) family in a framework where hierarchies are considered for computational reasons, while Oh (2017) optimizes recall by fixing \(|\widehat{Y}|\) as a user-defined parameter. Also popular in hierarchical classification is the tree-distance loss, which could be interpreted as a way of evaluating set-valued predictions (Bi and Kwok 2015). This loss is not a member of the family (42), however. Besides, it appears to be a less interesting loss function from the perspective of abstention in cases of uncertainty: by minimizing the tree-distance loss, the classifier will almost never predict leaf nodes of the hierarchy. Instead, it will often predict nodes close to the root, which correspond to sets with many elements, a behavior that is undesirable if one wants to abstain only in cases of sufficient uncertainty.
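To illustrate the restricted prediction space mentioned above, the following sketch evaluates the expected utility (42) only for label sets that correspond to nodes of a hierarchy; the node structure, the probabilities, and the reuse of the illustrative \(g_{\delta ,\gamma }\) from the earlier sketch are assumptions made purely for the example.

```python
import numpy as np

def bayes_optimal_node(probs, nodes, g):
    """Restricted variant of (41) for hierarchical classification.

    Only label sets that correspond to nodes of the class hierarchy are
    admissible, so it suffices to evaluate the expected utility (42) of each
    node, i.e., g(number of leaves) times the probability mass of its leaves.
    """
    best_node, best_value = None, -np.inf
    for node, leaves in nodes.items():
        value = g(len(leaves)) * probs[list(leaves)].sum()
        if value > best_value:
            best_node, best_value = node, value
    return best_node

# Hypothetical three-leaf hierarchy: the root covers {0, 1, 2}, one inner node covers {0, 1}
nodes = {"root": {0, 1, 2}, "inner": {0, 1}, "leaf0": {0}, "leaf1": {1}, "leaf2": {2}}
g = lambda s: 1.6 / s - 0.6 / s ** 2          # same illustrative g_{delta,gamma} as before
probs = np.array([0.48, 0.42, 0.10])          # hypothetical p(y | x_q) over the leaves
print(bayes_optimal_node(probs, nodes, g))    # "inner": abstains between the two plausible leaves
```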

Quite obviously, methods that maximize set-based utility scores are closely connected to the quantification of uncertainty, since the decision about a suitable set of predictions is necessarily derived from information of that kind. The overwhelming majority of the above-mentioned methods start from conditional class probabilities \(p(y \, | \,{\varvec{x}}_q)\) that are estimated in a classical frequentist way, so that uncertainties in decisions are of an aleatoric nature. Exceptions include Yang et al. (2017a) and Nguyen et al. (2018), who further explore ideas from imprecise probability theory and reliable classification to generate label sets that capture both aleatoric and epistemic uncertainty.

5 Discussion and conclusion

The importance of distinguishing between different types of uncertainty has recently been recognized in machine learning. The goal of this paper was to sketch existing approaches in this direction, with an emphasis on the quantification of aleatoric and epistemic uncertainty about predictions in supervised learning. Looking at the problem from the perspective of the standard setting of supervised learning, it is natural to associate epistemic uncertainty with the lack of knowledge about the true (or Bayes-optimal) hypothesis \(h^*\) within the hypothesis space \({\mathcal{H}}\), i.e., uncertainty that is in principle reducible by the learner and can be diminished on the basis of additional data. Aleatoric uncertainty, on the other hand, is the irreducible part of the uncertainty in a prediction, which is due to the inherently stochastic nature of the dependency between instances \({\varvec{x}}\) and outcomes y.

In a Bayesian setting, epistemic uncertainty is hence reflected by the (posterior) probability \(p(h \, | \,{\mathcal{D}})\) on \({\mathcal{H}}\): The less peaked this distribution, the less informed the learner is, and the higher its (epistemic) uncertainty. As we argued, however, important information about this uncertainty might get lost through Bayesian model averaging. In this regard, we also argued that a (graded) set-based representation of epistemic uncertainty could be a viable alternative to a probabilistic representation, especially due to the difficulty of representing ignorance with probability distributions. This idea is perhaps most explicit in version space learning, where epistemic uncertainty is in direct correspondence with the size of the version space. The approach by Senge et al. (2014) combines concepts from (generalized) version space learning and Bayesian inference.

There are also other methods for modeling uncertainty, such as conformal prediction, for which the role of aleatoric and epistemic uncertainty is not immediately clear. Currently, the field is developing very dynamically and is far from settled. New proposals for modeling and quantifying uncertainty appear on a regular basis, some of them rather ad hoc and others better justified. Eventually, it would be desirable to “derive” a measure of total uncertainty as well as its decomposition into aleatoric and epistemic parts from basic principles in a mathematically rigorous way, i.e., to develop a kind of axiomatic basis for such a decomposition, comparable to axiomatic justifications of the entropy measure (Csiszár 2008).

In addition to theoretical problems of this kind, there are also many open practical questions. This includes, for example, the question of how to perform an empirical evaluation of methods for quantifying uncertainty, whether aleatoric, epistemic, or total. In fact, unlike for the prediction of a target variable, the data does not normally contain information about any sort of “ground truth” uncertainty. What is often done, therefore, is to evaluate predicted uncertainties indirectly, that is, by assessing their usefulness for improved prediction and decision making. As an example, recall the idea of utility maximization discussed in Sect. 4.9, where the decision is a set-valued prediction. Another example is accuracy-rejection curves, which depict the accuracy of a predictor as a function of the rejection rate (Hühn and Hüllermeier 2009): A classifier that is allowed to abstain on a fraction p of its predictions will predict only on the fraction \(1-p\) of cases on which it feels most certain. If it is able to quantify its own uncertainty well, it should improve its accuracy with increasing p, hence the accuracy-rejection curve should be monotone increasing (unlike the flat curve obtained for random abstention).
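The following is a minimal sketch of how such an accuracy-rejection curve might be computed from per-instance uncertainty scores and 0/1 correctness indicators; the function name and the toy data are purely illustrative.

```python
import numpy as np

def accuracy_rejection_curve(uncertainty, correct, rejection_rates):
    """Accuracy on the retained predictions as a function of the rejection rate.

    For each rejection rate p, the fraction p of predictions with the highest
    uncertainty is discarded and accuracy is computed on the remaining ones.
    A useful uncertainty measure should yield a monotone increasing curve.
    """
    order = np.argsort(uncertainty)                  # most certain predictions first
    correct = np.asarray(correct, dtype=float)[order]
    accuracies = []
    for p in rejection_rates:
        keep = max(1, int(round((1.0 - p) * len(correct))))
        accuracies.append(correct[:keep].mean())
    return np.array(accuracies)

# Toy data: per-instance uncertainty scores and 0/1 indicators of correct prediction
uncertainty = np.array([0.05, 0.60, 0.10, 0.80, 0.30, 0.20])
correct = np.array([1, 0, 1, 0, 1, 1])
rates = np.linspace(0.0, 0.5, 6)
print(accuracy_rejection_curve(uncertainty, correct, rates))   # increases from 0.67 towards 1.0
```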

Most approaches so far neglect model uncertainty, in the sense of proceeding from the (perhaps implicit) assumption of a correctly specified hypothesis space \({\mathcal{H}}\), that is, the assumption that \({\mathcal{H}}\) contains a (probabilistic) predictor which is coherent with the data. In Senge et al. (2014), for example, this assumption is reflected by the definition of plausibility in terms of normalized likelihood, which always ensures the existence of at least one fully plausible hypothesis; indeed, (35) is a measure of relative, not absolute plausibility. On the one hand, it is true that model induction, like statistical inference in general, is not possible without underlying assumptions, and that conclusions drawn from data are always conditional on these assumptions. On the other hand, misspecification of the model class is a common problem in practice and should therefore not be ignored. In principle, conflict and inconsistency can be seen as another source of uncertainty, in addition to randomness and lack of knowledge. Therefore, it would be useful to reflect this source of uncertainty as well, for example in the form of non-normalized plausibility functions \(\pi _{{\mathcal{H}}}\), in which the gap \(1- \sup _h \pi _{{\mathcal{H}}}(h)\) serves as a measure of inconsistency between \({\mathcal{H}}\) and the data \({\mathcal{D}}\), and hence as a measure of model uncertainty.

Related to this is the “closed world” assumption (Deng 2014), which is often violated in contemporary machine learning applications. Modeling uncertainty and allowing the learner to express ignorance is obviously important in scenarios in which new classes may emerge over time, implying a change of the underlying data-generating process, or in which the learner may be confronted with out-of-distribution queries. Some first proposals for dealing with such cases can already be found in the literature (DeVries and Taylor 2018; Lee et al. 2018b; Malinin and Gales 2018; Sensoy et al. 2018).

Finally, there are many ways in which other machine learning methodology may benefit from a proper quantification of uncertainty, and in which corresponding measures could be used for “uncertainty-informed” decisions. For example, Nguyen et al. (2019) take advantage of the distinction between epistemic and aleatoric uncertainty in active learning, arguing that the former is more relevant as a selection criterion for uncertainty sampling than the latter. Likewise, as already said, a suitable quantification of uncertainty can be very useful in set-valued prediction, where the learner is allowed to predict a subset of outcomes (and hence to partially abstain) in cases of uncertainty.