1 Introduction

Let \(\mathcal {X}\) be a set equipped with a binary relation \(\preceq\), for instance, some partial order. The general problem is as follows: For \(m \ge 2\) pairs \((x_1,z_1),\ldots , (x_m,z_m) \in \mathcal {X}\times \mathbb {R}\) and weights \(w_1,\ldots ,w_m > 0\), let

$$\begin{aligned} A(\varvec{z}) \ := \ \mathrm{arg\;min}_{\varvec{f} \in \mathbb {R}^m_{\downarrow , \varvec{x}}} \sum _{j=1}^m w_j (z_j - f_j)^2 , \end{aligned}$$
(1)

where

$$\mathbb {R}^m_{\downarrow , \varvec{x}}\ := \ \{ \; \varvec{f}\in \mathbb {R}^m: x_i\preceq x_j \ \text {implies that} \ f_i\ge f_j\}.$$
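As a sanity check for the algorithms discussed later, the projection (1) can be evaluated on small instances with a generic convex solver. The following Python sketch is our own illustration and not part of the procedures studied in this paper: it encodes the relation as a list of index pairs \((i,j)\) with \(x_i \preceq x_j\), so that the constraint reads \(f_i \ge f_j\), and uses SciPy's SLSQP routine.

```python
import numpy as np
from scipy.optimize import minimize

# Brute-force evaluation of A(z) in (1) for an arbitrary binary relation,
# encoded as a list of 0-based index pairs (i, j) meaning x_i <= x_j, so that
# the constraint is f_i >= f_j.  Only intended as a sanity check on small
# examples; function and variable names are ours.
def antitonic_lsq(z, w, pairs):
    z, w = np.asarray(z, float), np.asarray(w, float)
    constraints = [{"type": "ineq", "fun": lambda f, i=i, j=j: f[i] - f[j]}
                   for (i, j) in pairs]
    res = minimize(lambda f: np.sum(w * (z - f) ** 2), x0=z.copy(),
                   method="SLSQP", constraints=constraints)
    return res.x

# Example: a total order x_1 <= x_2 <= x_3 with z = (0, 1, 2) and unit weights;
# the antitonic fit pools all three components into their mean (1, 1, 1).
print(antitonic_lsq([0.0, 1.0, 2.0], [1.0, 1.0, 1.0], [(0, 1), (1, 2), (0, 2)]))
```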

Suppose that \(\varvec{z}^{(0)}, \varvec{z}^{(1)}, \ldots , \varvec{z}^{(n)}\) are vectors in \(\mathbb {R}^m\) such that for \(1 \le t \le n\), the two vectors \(\varvec{z}^{(t-1)}\) and \(\varvec{z}^{(t)}\) differ only in a few components, and our task is to compute all antitonic (i.e. monotone decreasing) approximations \(A(\varvec{z}^{(0)}), A(\varvec{z}^{(1)}), \ldots , A(\varvec{z}^{(n)})\). We show that \(A(\varvec{z}^{(t)})\) can be computed efficiently, provided that \(A(\varvec{z}^{(t-1)})\) is already known. Roughly speaking, this is achieved by noticing that \(A(\varvec{z}^{(t-1)})\) and \(A(\varvec{z}^{(t)})\) share some identical components, and that the remaining components of \(A(\varvec{z}^{(t)})\) can be determined directly from \(A(\varvec{z}^{(t-1)})\) and \(\varvec{z}^{(t)}\) with only a few operations.

The efficient computation of a sequence of antitonic approximations appears naturally in the context of isotonic distributional regression, see Henzi et al. (2021), Mösching and Dümbgen (2020) and Jordan et al. (2021). There, one observes random pairs \((X_1,Y_1),\) \((X_2,Y_2), \ldots , (X_n,Y_n)\) in \(\mathcal {X}\times \mathbb {R}\) such that, conditional on \((X_i)_{i=1}^n\), the random variables \(Y_1, Y_2, \ldots , Y_n\) are independent with distribution functions \(F_{X_1}, F_{X_2}, \ldots , F_{X_n}\), where \((F_x)_{x \in \mathcal {X}}\) is an unknown family of distribution functions. Then the goal is to estimate the latter family under the sole assumption that \(F_{x} \ge F_{x'}\) pointwise whenever \(x \preceq x'\). This notion of ordering of distributions is known as stochastic ordering, or first order stochastic dominance. This isotonic distributional regression leads to the aforementioned least squares problem, where \(x_1, \ldots , x_m\) denote the different elements of \(\{X_1,X_2,\ldots ,X_n\}\), and \(\varvec{z}^{(t)}\) has components

$$z_j^{(t)} \ := \ w_j^{-1} \sum _{i \,: \, X_i = x_j} 1_{[Y_i \le Y_{(t)}]}$$

with \(w_j := \#\{i \le n : X_i = x_j\}\), \(Y_{(0)}:=-\infty\) and \(Y_{(t)}\) is the t-th order statistic of the sample \(\{Y_1,Y_2,\ldots ,Y_n\}\).

Section 2 provides some facts about monotone least squares which are useful for the present task. For a complete account and derivations, we refer to Barlow et al. (1972) and Robertson et al. (1988). Section 3 then shows how to turn these facts into an efficient computation scheme in the case of a total order \(\preceq\). Finally, we discuss the specific application to isotonic distributional regression and provide numerical experiments which show that our procedure decreases the computation times of the naive approach substantially.

2 Some Facts About Antitonic Least Squares Estimation

Since the sum on the right hand side of (1) is a strictly convex and coercive function of \(\varvec{f} \in \mathbb {R}^m\), and since \(\mathbb {R}^m_{\downarrow , \varvec{x}}\) is a closed and convex set, \(A(\varvec{z})\) is well-defined. It possesses several well-known characterizations, two of which are particularly useful for our considerations.

The first characterization uses local weighted averages. Let us first introduce some notation. In this article, upper, lower and level sets are seen as subsets of \(\{1, \ldots , m\}\) inheriting the structure of \((\mathcal {X},\preceq )\). More precisely, a set \(U \subset \{1, \ldots , m\}\) is an upper set if \(i \in U\) and \(x_i \preceq x_j\) imply that \(j \in U\). A set \(L \subset \{1, \ldots , m\}\) is a lower set if \(j \in L\) and \(x_i \preceq x_j\) imply that \(i \in L\). The families of all upper and all lower sets are denoted by \(\mathcal {U}\) and \(\mathcal {L}\), respectively. For a non-empty set \(S\subset \{1,\ldots ,m\}\), its weight and the weighted average of \(\varvec{z}\) over S are respectively defined as

$$w_S^{} \ := \ \sum _{j\in S} w_j \quad \text {and}\quad M_S^{}(\varvec{z}) \ := \ w_S^{-1}\sum _{j\in S} w_jz_j .$$

Characterization I

For any index \(1 \le j \le m\),

$$A_j(\varvec{z}) \ = \ \min _{U \in \mathcal {U}: \, j \in U} \, \max _{L \in \mathcal {L}: \, j \in L} \, M_{U \cap L}(\varvec{z}) \ = \ \max _{L \in \mathcal {L}: \, j \in L} \, \min _{U \in \mathcal {U}: \, j \in U} \, M_{L \cap U}(\varvec{z}) .$$
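In the totally ordered case \(x_1< \cdots < x_m\), upper sets are index "suffixes" \(\{a,\ldots ,m\}\) and lower sets are "prefixes" \(\{1,\ldots ,b\}\), so Characterization I reduces to the classical min-max formula \(A_j(\varvec{z}) = \min _{a \le j} \max _{b \ge j} M_{\{a,\ldots ,b\}}(\varvec{z})\). The following brute-force Python sketch (an \(O(m^3)\) sanity check with our own function names, not an efficient algorithm) evaluates this formula directly.

```python
import numpy as np

# Brute-force evaluation of Characterization I for a total order x_1 < ... < x_m,
# where upper sets are suffixes {a,...,m} and lower sets are prefixes {1,...,b}.
# Only a sanity check; names are ours.
def antitonic_minmax(z, w):
    z, w = np.asarray(z, float), np.asarray(w, float)
    m = len(z)
    def mean(a, b):                      # weighted mean M_{a..b}(z), 0-based, inclusive
        return np.average(z[a:b + 1], weights=w[a:b + 1])
    f = np.empty(m)
    for j in range(m):
        f[j] = min(max(mean(a, b) for b in range(j, m)) for a in range(j + 1))
    return f

# Example: z = (0, 1) with unit weights gives the constant fit (0.5, 0.5).
print(antitonic_minmax([0.0, 1.0], [1.0, 1.0]))
```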

For all vectors \(\varvec{f}\in \mathbb {R}^m\), numbers \(\xi \in \mathbb {R}\) and relations \(\ltimes\) in \(\{<,\le ,=,\ge ,>\}\), let

$$[ \ \varvec{f} \ltimes \xi ] \ := \ \{j \in \{1, \ldots , m\}: f_j \ltimes \xi \}.$$

For example, the family of sets \([ \ \varvec{f} = \xi ]\) indexed by \(\xi \in \{ \; f_1, \ldots ,f_m\}\) yields a partition of \(\{1, \ldots , m\}\) such that two indices i and j belong to the same block if and only if \(f_i = f_j\). In case of \(\varvec{f} \in \mathbb {R}^m_{\downarrow , \varvec{x}}\), \([ \ \varvec{f} < \xi ]\) and \([ \ \varvec{f} \le \xi ]\) are upper sets, whereas \([ \ \varvec{f} > \xi ]\) and \([ \ \varvec{f} \ge \xi ]\) are lower sets.

Characterization II

A vector \(\varvec{f} \in \mathbb {R}^m_{\downarrow , \varvec{x}}\) equals \(A(\varvec{z})\) if and only if for any number \(\xi \in \{ \; f_1,\ldots ,f_m\}\),

$$\begin{aligned} M_{U \cap [ \ \varvec{f} = \xi ]}(\varvec{z}) &\ge \ \xi \quad \text {for }U \in \mathcal {U}\text { such that} \ U\cap [ \ \varvec{f} = \xi ] \ne \emptyset , \end{aligned}$$
(2)
$$\begin{aligned} M_{L \cap [ \ \varvec{f} = \xi ]}(\varvec{z}) &\le \ \xi \quad \text {for }L \in \mathcal {L}\text { such that} \ L\cap [ \ \varvec{f} = \xi ] \ne \emptyset . \end{aligned}$$
(3)

In particular, \(\xi = M_{[ \ \varvec{f} = \xi ]}(\varvec{z})\).
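For a total order, the level sets \([ \ \varvec{f} = \xi ]\) of an antitonic vector \(\varvec{f}\) are contiguous index blocks; upper sets intersect such a block in its right parts and lower sets in its left parts, so conditions (2) and (3) amount to weighted means of \(\varvec{z}\) over right parts of a block being at least \(\xi\) and over left parts being at most \(\xi\). The following Python sketch (our own check, with hypothetical names) verifies these conditions for a candidate \(\varvec{f}\).

```python
import numpy as np

# Sanity check of Characterization II in the totally ordered case x_1 < ... < x_m;
# returns True if and only if f is non-increasing and satisfies (2) and (3).
def is_antitonic_lsq(f, z, w, tol=1e-10):
    f, z, w = (np.asarray(a, float) for a in (f, z, w))
    if np.any(np.diff(f) > tol):                 # f must be non-increasing
        return False
    for xi in np.unique(f):
        (block,) = np.where(np.abs(f - xi) <= tol)
        p, q = block[0], block[-1]               # level set is the block {p,...,q}
        for c in range(p, q + 1):
            if np.average(z[c:q + 1], weights=w[c:q + 1]) < xi - tol:   # condition (2)
                return False
            if np.average(z[p:c + 1], weights=w[p:c + 1]) > xi + tol:   # condition (3)
                return False
    return True

# Example: f = (0.5, 0.5) is the antitonic fit of z = (0, 1) with unit weights.
print(is_antitonic_lsq([0.5, 0.5], [0.0, 1.0], [1.0, 1.0]))   # True
print(is_antitonic_lsq([0.0, 1.0], [0.0, 1.0], [1.0, 1.0]))   # False (not antitonic)
```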

One possible reference for Characterizations I and II is Domínguez-Menchero and González-Rodríguez (2007). They treat the case of a quasi-order \(\preceq\) and more general target functions \(\sum _{j=1}^m h_j( \; f_j)\) to be minimized over \(\varvec{f} \in \mathbb {R}^m_{\downarrow , \varvec{x}}\). For the present setting with an arbitrary binary relation \(\preceq\) and weighted least squares, a relatively short and self-contained derivation of these two characterizations is available from the authors upon request.

The next lemma summarizes some facts about changes in \(A(\varvec{z})\) if some components of \(\varvec{z}\) are increased.

Lemma 2.1

Let \(\varvec{z}, \varvec{\tilde{z}} \in \mathbb {R}^m\) such that \(\varvec{\tilde{z}} \ge \varvec{z}\) component-wise. Then the following conclusions hold true for \(\varvec{f} := A(\varvec{z})\), \(\varvec{\tilde{f}}:=A(\varvec{\tilde{z}})\) and \(K := \{k : \tilde{z}_k > z_k\}\):

(i) \(\varvec{f} \le \varvec{\tilde{f}}\) component-wise.

(ii) \(\tilde{f}_i = f_i\) whenever \(f_i < \min _{k \in K} f_k\).

(iii) \(\tilde{f}_i = f_i\) whenever \(\tilde{f}_i > \max _{k \in K} \tilde{f}_k\).

(iv) \(\tilde{f}_i = \tilde{f}_j\) whenever \(f_i = f_j\) and \(x_i, x_j \preceq x_k\) for all \(k \in K\).

Figure 1 illustrates the statements of Lemma 2.1 on \(\mathbb {R}^2\) equipped with the componentwise order in case of \(K = \{j_o\}\). The colored areas show level sets of a hypothetical antitonic regression \(\varvec{f}\), and \(x_{j_o}\) is the point where \(\tilde{z}_{j_o} > z_{j_o}\). By part (ii) of Lemma 2.1, we know that \(\tilde{f}_i = f_i\) if \(f_i < f_{j_o}\), so the values of \(\varvec{f}\) and \(\varvec{\tilde{f}}\) are equal on the orange and yellow regions in the top right corner, which is indicated by saturated colors. Furthermore, when passing from \(\varvec{z}\) to \(\varvec{\tilde{z}}\), the slightly transparent pink, blue and green level sets on the bottom left (including the point \(x_{j_o}\)) can only be merged, but never be split. This follows from part (iv) of Lemma 2.1. Finally, for all points in the faded pink, blue and green areas, there is no statement about the behavior of the antitonic regression when passing from \(\varvec{z}\) to \(\varvec{\tilde{z}}\).

Fig. 1 Illustration of the statements of Lemma 2.1 on \(\mathbb {R}^2\)

Proof of Lemma 2.1

Part (i) is a direct consequence of Characterization I.

As to part (ii), if \(f_i < \min _{k \in K} f_k\), then \(K \subset [ \ \varvec{f} > f_i]\), whence

$$\begin{aligned} \tilde{f}_i &= \ \max _{L \in \mathcal {L}: \, i \in L} \, \min _{U \in \mathcal {U}: \, i \in U} \, M_{L\cap U}^{}(\varvec{\tilde{z}})&(\text {Char.~I}) \\&\le \ \max _{L \in \mathcal {L}: \, i \in L} \, M_{L \cap [ \ \varvec{f} \le \; f_i]}^{}(\varvec{\tilde{z}})&(i \in [ \ \varvec{f} \le \; f_i] \in \mathcal {U}) \\&= \ \max _{L \in \mathcal {L}: \, i \in L} \, M_{L \cap [ \ \varvec{f} \le \; f_i]}^{}(\varvec{z})&(K \cap [ \ \varvec{f} \le \; f_i] = \emptyset ) \\&= \ \max _{L \in \mathcal {L}: \, i \in L} \sum _{\xi \le \; f_i: \, L \cap [ \ \varvec{f} = \xi ] \ne \emptyset } \frac{w_{L \cap [ \ \varvec{f} = \xi ]}}{w_{L\cap [ \ \varvec{f} \le \; f_i]}} \, M_{L \cap [ \ \varvec{f} = \xi ]}^{}(\varvec{z}) \\&\le \ \max _{L \in \mathcal {L}: \, i \in L} \sum _{\xi \le \; f_i: \, L \cap [ \ \varvec{f} = \xi ] \ne \emptyset } \frac{w_{L \cap [ \ \varvec{f} = \xi ]}}{w_{L\cap [ \ \varvec{f} \le \; f_i]}} \, \xi&(\text {Char.~II}) \\&\le \ f_i . \end{aligned}$$

This inequality and part (i) show that \(\tilde{f}_i = f_i\).

Part (iii) is proved analogously. If \(\tilde{f}_i > \max _{k \in K} \tilde{f}_k\), then \(K \subset [ \; \varvec{\tilde{f}} < \tilde{f}_i]\), whence

$$\begin{aligned} f_i &= \ \min _{U \in \mathcal {U}: \, i \in U} \, \max _{L \in \mathcal {L}: \, i \in L} \, M_{U\cap L}^{}(\varvec{z})&(\text {Char.~I}) \\&\ge \ \min _{U \in \mathcal {U}: \, i \in U} \, M_{U \cap [ \; \varvec{\tilde{f}} \ge \tilde{f}_i]}^{}(\varvec{z})&(i \in [ \; \varvec{\tilde{f}} \ge \tilde{f}_i] \in \mathcal {L}) \\&= \ \min _{U \in \mathcal {U}: \, i \in U} \, M_{U \cap [ \; \varvec{\tilde{f}} \ge \tilde{f}_i]}^{}(\varvec{\tilde{z}})&(K \cap [ \; \varvec{\tilde{f}} \ge \tilde{f}_i] = \emptyset ) \\&= \ \min _{U \in \mathcal {U}: \, i \in U} \sum _{\xi \ge \tilde{f}_i: \, U \cap [ \; \varvec{\tilde{f}} = \xi ] \ne \emptyset } \frac{w_{U \cap [ \; \varvec{\tilde{f}} = \xi ]}}{w_{U\cap [ \; \varvec{\tilde{f}} \ge \tilde{f}_i]}} \, M_{U \cap [ \; \varvec{\tilde{f}} = \xi ]}^{}(\varvec{\tilde{z}}) \\&\ge \ \min _{U \in \mathcal {U}: \, i \in U} \sum _{\xi \ge \tilde{f}_i: \, U \cap [ \; \varvec{\tilde{f}} = \xi ] \ne \emptyset } \frac{w_{U \cap [ \; \varvec{\tilde{f}} = \xi ]}}{w_{U\cap [ \; \varvec{\tilde{f}} \ge \tilde{f}_i]}} \, \xi&(\text {Char.~II}) \\&\ge \ \tilde{f}_i . \end{aligned}$$

This inequality and part (i) show that \(\tilde{f}_i = f_i\).

Part (iv) follows directly from parts (i) and (iii). Let i and j be different indices such that \(f_i = f_j\) and \(x_i, x_j \preceq x_k\) for all \(k \in K\). It follows from \(\varvec{\tilde{f}} \in \mathbb {R}^m_{\downarrow , \varvec{x}}\) that \(\tilde{f}_i, \tilde{f}_j \ge \max _{k \in K} \tilde{f}_k\). Consequently, if \(\tilde{f}_j > \tilde{f}_i\), then \(\tilde{f}_j > \max _{k \in K} \tilde{f}_k\), so parts (i) and (iii) would imply that

$$\tilde{f}_i \ \ge \ f_i = f_j \ = \ \tilde{f}_j ,$$

contradicting \(\tilde{f}_j > \tilde{f}_i\). By symmetry, \(\tilde{f}_i > \tilde{f}_j\) is impossible as well, whence \(\tilde{f}_i = \tilde{f}_j\).

The Special Case of a Total Order

If one replaces the binary relation \(\preceq\) by a total order \(\le\) on \(\mathcal {X}\), for example the usual total order on a subset of \(\mathbb {R}\), the conclusions of Lemma 2.1 take a simpler form. In the case of a total order, we assume that the covariates are ordered as follows:

$$x_1 \ \le \ x_2 \ \le \ \cdots \ \le \ x_m ,$$

so that \(i\le j\) implies that \(x_i\le x_j\), while \(x_i < x_j\) implies that \(i < j\).

Corollary 2.2

Let \(\varvec{z}, \varvec{\tilde{z}} \in \mathbb {R}^m\) such that \(\varvec{z} \le \varvec{\tilde{z}}\) component-wise. Then the following conclusions hold true for \(\varvec{f} := A(\varvec{z})\) and \(\varvec{\tilde{f}}:=A(\varvec{\tilde{z}})\):

(i) \(\varvec{f} \le \varvec{\tilde{f}}\) component-wise.

(ii) Let \(k \in \{1,\ldots ,m-1\}\) such that \(f_k > f_{k+1}\) and \((\tilde{z}_j)_{j> k} = (z_j)_{j > k}\). Then

$$( \; \tilde{f}_j)_{j> k} \ = \ ( \; f_j)_{j > k} .$$

(iii) Let \(k \in \{2,\ldots ,m\}\) such that \(\tilde{f}_{k-1} > \tilde{f}_k\) and \((\tilde{z}_j)_{j< k} = (z_j)_{j < k}\). Then

$$( \; \tilde{f}_j)_{j< k} \ = \ ( \; f_j)_{j < k} .$$

(iv) Let \(k \in \{2,\ldots ,m\}\) such that \((\tilde{z}_j)_{j< k} = (z_j)_{j < k}\). Then

$$\{j< k : \tilde{f}_j> \tilde{f}_{j+1}\} \ \subset \ \{j < k : f_j > f_{j+1}\} .$$

3 A Sequential Algorithm for Total Orders

Lemma 2.1 is potentially useful to accelerate algorithms for isotonic distributional regression with arbitrary partial orders, possibly in conjunction with the recursive partitioning algorithm by Luss and Rosset (2014), but this will require additional research. Now we focus on improvements of the well-known pool-adjacent-violators algorithm (PAVA) for a total order.

3.1 General Considerations

In what follows, we assume that \(x_1< \cdots < x_m\), so \(\mathbb {R}^m_{\downarrow , \varvec{x}}\) coincides with \(\mathbb {R}^m_{\downarrow }= \{ \varvec{f} \in \mathbb {R}^m : f_1 \ge \cdots \ge f_m\}\). To understand the different variants of the PAVA, let us recall two basic facts about \(A(\varvec{z})\). Let \(\mathcal {P}= (P_1,\ldots ,P_d)\) be a partition of \(\{1,\ldots ,m\}\) into blocks \(P_s = \{b_{s-1}+1,\ldots ,b_s\}\), where \(0 = b_0< b_1< \cdots < b_d = m\), and let \(\mathbb {R}^m_{\mathcal {P}}\) be the set of vectors \(\varvec{f} \in \mathbb {R}^m\) such that \(f_i = f_j\) whenever \(i\) and \(j\) belong to the same block of \(\mathcal {P}\).

Fact 1

Let \(r_1> \cdots > r_d\) be the sorted elements of \(\{A_i(\varvec{z}) : 1 \le i \le m\}\), and let \(\mathcal {P}\) consist of the blocks \(P_s = \{i : A_i(\varvec{z}) = r_s\}\). Then \(r_s = M_{P_s}^{}(\varvec{z})\) for \(1 \le s \le d\).

Fact 2

Suppose that \(A(\varvec{z}) \in \mathbb {R}^m_{\mathcal {P}}\) for a given partition \(\mathcal {P}\) with \(d \ge 2\) blocks. If \(s \in \{1,\ldots ,d-1\}\) such that \(M_{P_s}^{}(\varvec{z}) \le M_{P_{s+1}}^{}(\varvec{z})\), then \(A_i(\varvec{z})\) is constant in \(i \in P_s \cup P_{s+1}\). That is, one may replace \(\mathcal {P}\) with a coarser partition by pooling \(P_s\) and \(P_{s+1}\), and still \(A(\varvec{z}) \in \mathbb {R}^m_{\mathcal {P}}\).

Fact 1 is a direct consequence of Characterization II. To verify Fact 2, suppose that \(\varvec{f} \in \mathbb {R}^m_{\downarrow }\cap \mathbb {R}^m_{\mathcal {P}}\) such that \(f_i = r_s\) for \(i \in P_s\), \(f_i = r_{s+1}\) for \(i \in P_{s+1}\), and \(r_s > r_{s+1}\). Now we show that \(\varvec{f}\) cannot be equal to \(A(\varvec{z})\). For \(t \ge 0\) let \(\varvec{f}(t) \in \mathbb {R}^m_{\mathcal {P}}\) be given by

$$f_i(t) \ = \ f_i - 1_{[i \in P_s]} t w_{P_s}^{-1} + 1_{[i \in P_{s+1}]} t w_{P_{s+1}}^{-1} .$$

Then \(\varvec{f}(0) = \varvec{f}\), and \(\varvec{f}(t) \in \mathbb {R}^m_{\downarrow }\) if \(t \le (r_s - r_{s+1})/(w_{P_{s+1}}^{-1} + w_{P_s}^{-1})\). But

$$\frac{d}{dt} \Big |_{t = 0} \sum _{i=1}^m w_i( \; f_i(t) - z_i)^2 \ = \ 2 (r_{s+1} - r_s) - 2 \bigl ( M_{P_{s+1}}^{}(\varvec{z}) - M_{P_s}^{}(\varvec{z}) \bigr ) \ < \ 0 ,$$

so for sufficiently small \(t>0\), \(\varvec{f}(t)\in \mathbb {R}^m_{\downarrow }\) and yields a strictly smaller value of the target function than \(\varvec{f}(0) = \varvec{f}\). Hence \(\varvec{f} \ne A(\varvec{z})\).

Facts 1 and 2 already indicate a general PAV strategy to compute \(A(\varvec{z})\). One starts with the finest partition \(\mathcal {P}= (\{1\}, \ldots , \{m\})\). As long as \(\mathcal {P}\) contains two neighboring blocks \(P_s\) and \(P_{s+1}\) such that \(M_{P_s}^{}(\varvec{z}) \le M_{P_{s+1}}^{}(\varvec{z})\), the partition \(\mathcal {P}\) is coarsened by replacing \(P_s\) and \(P_{s+1}\) with the block \(P_s \cup P_{s+1}\).

Standard PAVA

Specifically, one works with three tuples: \(\mathcal {P}= (P_1,\ldots ,P_d)\) is a partition of \(\{1,\ldots ,b_d\}\) into blocks \(P_s = \{b_{s-1}+1,\ldots ,b_s\}\), where \(0 = b_0< b_1< \cdots < b_d\). The number \(b_d\) runs from 1 to m, and the number \(d \ge 1\) also changes during the algorithm. The tuples \(\mathcal {W}= (W_1,\ldots ,W_d)\) and \(\mathcal {M}= (M_1,\ldots ,M_d)\) contain the corresponding weights \(W_s = w_{P_s}^{}\) and weighted means \(M_s = M_{P_s}^{}(\varvec{z})\). Before increasing \(b_d\), the tuples \(\mathcal {P}\), \(\mathcal {W}\) and \(\mathcal {M}\) describe the minimizer of \(\sum _{i=1}^{b_d} w_i ( \; f_i - z_i)^2\) over all \(\varvec{f} \in \mathbb {R}^{b_d}_{\downarrow }\). Here is the complete algorithm:

Initialization: We set \(\mathcal {P}\leftarrow (\{1\})\), \(\mathcal {W}\leftarrow (w_1)\), \(\mathcal {M}\leftarrow (z_1)\), and \(d \leftarrow 1\).

Induction step: If \(b_d < m\), we add a new block by setting

$$\mathcal {P}\ \leftarrow \ (\mathcal {P},\{b_d+1\}) , \quad \mathcal {W}\ \leftarrow \ (\mathcal {W},w_{b_d+1}^{}) , \quad \mathcal {M}\ \leftarrow \ (\mathcal {M}, z_{b_d+1}^{}) ,$$

and \(d \leftarrow d+1\). Then, while \(d > 1\) and \(M_{d-1} \le M_d\), we pool the “violators” \(P_{d-1}\) and \(P_d\) by setting

$$\begin{aligned} \mathcal {P}&\leftarrow \ \bigl ( (P_j)_{j< d-1}, P_{d-1} \cup P_d \bigr ) , \\ \mathcal {M} & \leftarrow \ \Bigl ( (M_j)_{j< d-1}, \frac{W_{d-1}M_{d-1} + W_dM_d}{W_{d-1} + W_d} \Bigr ) , \\ \mathcal {W}&\leftarrow \ \bigl ( (W_j)_{j < d-1}, W_{d-1} + W_d \bigr ) , \end{aligned}$$

and \(d \leftarrow d-1\).

Finalization: Eventually, \(\mathcal {P}\) is a partition of \(\{1,\ldots ,m\}\) into blocks such that \(M_1> \cdots > M_d\) and

$$A_j(\varvec{z}) \ = \ M_s \quad \text {for} \ j \in P_s \ \text {and}\ 1 \le s \le d .$$
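The following Python sketch implements the standard PAVA exactly as described above, storing the blocks via their right endpoints together with the block weights and means; it is a minimal illustration with our own variable names rather than an optimized implementation.

```python
# A minimal Python sketch of the standard PAVA for the antitonic weighted least
# squares problem with x_1 < ... < x_m.  Blocks are stored via their right
# endpoints (0-based, exclusive) together with the block weights W and means M.
def pava_antitonic(z, w):
    m = len(z)
    B, W, M = [], [], []                      # right endpoints, weights, means
    for i in range(m):
        B.append(i + 1)                       # new block {i}
        W.append(float(w[i]))
        M.append(float(z[i]))
        while len(M) > 1 and M[-2] <= M[-1]:  # pool adjacent violators
            W_new = W[-2] + W[-1]
            M[-2] = (W[-2] * M[-2] + W[-1] * M[-1]) / W_new
            W[-2] = W_new
            B[-2] = B[-1]
            del B[-1], W[-1], M[-1]
    f, start = [], 0
    for b, mu in zip(B, M):                   # expand block means into the fit
        f.extend([mu] * (b - start))
        start = b
    return f, B, W, M

# Example: the antitonic fit of z = (1, 3, 2, 0) with unit weights is (2, 2, 2, 0).
print(pava_antitonic([1.0, 3.0, 2.0, 0.0], [1.0] * 4)[0])
```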

Modified PAVA

In our specific applications of the PAVA, we deal with vectors \(\varvec{z}\) containing long blocks \(\{a,\ldots ,b\}\) on which \(i \mapsto z_i\) is constant. Indeed, in regression settings with continuously distributed covariates and responses, \(\varvec{z}\) will always be a \(\{0,1\}\)-valued vector. Then it is worthwhile to utilize Fact 2 and modify the initialization as well as the very beginning of the induction step as follows:

For the initialization, we determine the largest index \(b_1\) such that \(z_1 = \cdots = z_{b_1}\), set \(P_1 := \{1,\ldots ,b_1\}\) and compute the corresponding weight \(w_{P_1}^{}\). Then we set \(\mathcal {P}\leftarrow (P_1)\), \(\mathcal {W}\leftarrow (w_{P_1}^{})\), \(\mathcal {M}\leftarrow (z_{b_1}^{})\) and \(d \leftarrow 1\).

At the beginning of the induction step, we determine the largest index \(b_{d+1} > b_d\) such that \(z_{b_d+1} = \cdots = z_{b_{d+1}}\), set \(P_{d+1} := \{b_d+1,\ldots ,b_{d+1}\}\) and compute the corresponding weight \(w_{P_{d+1}}^{}\). Then we set \(\mathcal {P}\leftarrow (\mathcal {P},P_{d+1})\), \(\mathcal {W}\leftarrow (\mathcal {W},w_{P_{d+1}}^{})\), \(\mathcal {M}\leftarrow (\mathcal {M}, z_{b_{d+1}}^{})\), and \(d \leftarrow d+1\).
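A minimal Python sketch of the modified PAVA (again with our own names): it differs from the previous sketch only in that maximal runs of equal values of \(\varvec{z}\) are merged into a single block before any pooling takes place.

```python
# Modified PAVA: maximal runs of equal z-values are merged into one block
# (justified by Fact 2) before the usual pooling; this pays off when z has
# long constant runs, e.g. for {0,1}-valued vectors.
def pava_antitonic_modified(z, w):
    m = len(z)
    B, W, M = [], [], []
    i = 0
    while i < m:
        j, w_run = i, 0.0
        while j < m and z[j] == z[i]:         # largest run z_i = ... = z_j
            w_run += w[j]
            j += 1
        B.append(j)
        W.append(w_run)
        M.append(float(z[i]))
        while len(M) > 1 and M[-2] <= M[-1]:  # pool adjacent violators as before
            W_new = W[-2] + W[-1]
            M[-2] = (W[-2] * M[-2] + W[-1] * M[-1]) / W_new
            W[-2] = W_new
            B[-2] = B[-1]
            del B[-1], W[-1], M[-1]
        i = j
    f, start = [], 0
    for b, mu in zip(B, M):
        f.extend([mu] * (b - start))
        start = b
    return f, B, W, M
```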

Abridged PAVA

Suppose that we have computed \(A(\varvec{z})\) with corresponding tuples \(\mathcal {P}= (P_1,\ldots ,P_d)\), \(\mathcal {W}= (W_1,\ldots ,W_d)\) and \(\mathcal {M}= (M_1,\ldots ,M_d)\) via the PAVA. Now let \(\varvec{\tilde{z}} \in \mathbb {R}^m\) such that \(\tilde{z}_{j_o} > z_{j_o}\) for one index \(j_o \in \{1,\ldots ,m\}\), while \((\tilde{z}_j)_{j \ne j_o} = (z_j)_{j \ne j_o}\). Let \(j_o \in P_{s_o}\) with \(s_o \in \{1,\ldots ,d\}\). By parts (ii) and (iv) of Corollary 2.2, the partition corresponding to \(A(\varvec{\tilde{z}})\) will be a coarsening of the partition with the following blocks:

$$P_s \ \ \text {for} \ 1 \le s< s_o , \quad \{b_{s_o-1} + 1,\ldots ,j_o\} , \quad \{j\} \ \ \text {for} \ j_o< j \le b_{s_o} , \quad P_s \ \ \text {for} \ s_o < s \le d .$$

Moreover, \(A_i(\varvec{\tilde{z}}) = A_i(\varvec{z})\) for \(i > b_{s_o}\). This allows us to compute \(A(\varvec{\tilde{z}})\) as follows, keeping copies of the auxiliary objects for \(A(\varvec{z})\) and indicating this with a superscript \(\varvec{z}\):

Initialization: We determine \(s_o \in \{1,\ldots ,d^{\varvec{z}}\}\) such that \(j_o \in P_{s_o}^{\varvec{z}}\). Then we set

$$\begin{aligned} \mathcal {P}&\leftarrow \ \bigl ( (P_s^{\varvec{z}})_{s< s_o}^{}, \{b_{s_o-1}^{\varvec{z}}+1,\ldots ,j_o\} \bigr ) , \\ \mathcal {M}&\leftarrow \ \bigl ( (M_s^{\varvec{z}})_{s< s_o}^{}, M_{P_{s_o}}^{}(\varvec{\tilde{z}}) \bigr ) , \\ \mathcal {W}&\leftarrow \ \bigl ( (W_s^{\varvec{z}})_{s < s_o}^{}, w_{P_{s_o}}^{} \bigr ) \end{aligned}$$

and \(d \leftarrow s_o\). While \(d > 1\) and \(M_{d-1} \le M_d\), we pool the violators \(P_{d-1}\) and \(P_d\) as in the induction step of PAVA. (This initialization is justified by part (iv) of Corollary 2.2.)

Induction step: If \(j_o < b_{s_o}^{\varvec{z}}\), we run the induction step of PAVA for \(b_d\) running from \(j_o+1\) to \(b_{s_o}^{\varvec{z}}\) with \(\varvec{\tilde{z}}\) in place of \(\varvec{z}\).

Finalization: If \(b_{s_o}^{\varvec{z}} < m\), we set

$$\begin{aligned} \mathcal {P}&\leftarrow \ \bigl ( \mathcal {P}, (P_s^{\varvec{z}})_{s_o< s \le d^{\varvec{z}}} \bigr ) , \\ \mathcal {M}&\leftarrow \ \bigl ( \mathcal {M}, (M_s^{\varvec{z}})_{s_o< s \le d^{\varvec{z}}} \bigr ) , \\ \mathcal {W}&\leftarrow \ \bigl ( \mathcal {W}, (W_s^{\varvec{z}})_{s_o < s \le d^{\varvec{z}}} \bigr ) \end{aligned}$$

and \(d \leftarrow d + d^{\varvec{z}} - s_o\). The new pair \((\mathcal {P},\mathcal {M})\) yields the vector \(A(\varvec{\tilde{z}})\). This finalization is justified by part (ii) of Corollary 2.2.
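The following Python sketch implements the abridged update for a single change \(\tilde{z}_{j_o} > z_{j_o}\). It takes the block representation of \(A(\varvec{z})\) as produced by the earlier sketches (right endpoints, weights and means) and returns the corresponding representation of \(A(\varvec{\tilde{z}})\), mirroring the initialization, induction and finalization steps above; names and data layout are ours.

```python
import numpy as np

# Abridged PAVA update.  Input: the block representation of A(z) from the
# sketches above (B = right endpoints, 0-based and exclusive; W = block
# weights; M = block means), the new vector zt with zt[jo] > z[jo] and
# zt[j] = z[j] otherwise, the weights w, and the index jo.
# Output: the block representation of A(zt).
def abridged_update(B, W, M, zt, w, jo):
    zt, w = np.asarray(zt, float), np.asarray(w, float)
    so = next(s for s, b in enumerate(B) if jo < b)   # block of A(z) containing jo
    lo = B[so - 1] if so > 0 else 0                   # its left endpoint
    # Initialization: keep the blocks s < so, restart with the block {lo,...,jo}.
    nB, nW, nM = list(B[:so]), list(W[:so]), list(M[:so])
    nB.append(jo + 1)
    nW.append(w[lo:jo + 1].sum())
    nM.append(np.average(zt[lo:jo + 1], weights=w[lo:jo + 1]))

    def pool():
        while len(nM) > 1 and nM[-2] <= nM[-1]:
            W_new = nW[-2] + nW[-1]
            nM[-2] = (nW[-2] * nM[-2] + nW[-1] * nM[-1]) / W_new
            nW[-2] = W_new
            nB[-2] = nB[-1]
            del nB[-1], nW[-1], nM[-1]

    pool()
    # Induction step: re-process the remaining indices of the old block of jo.
    for i in range(jo + 1, B[so]):
        nB.append(i + 1)
        nW.append(float(w[i]))
        nM.append(float(zt[i]))
        pool()
    # Finalization: the blocks to the right of the old block of jo are unchanged.
    return nB + list(B[so + 1:]), nW + list(W[so + 1:]), nM + list(M[so + 1:])
```

On random examples, the output can be checked against a full run of pava_antitonic applied to \(\varvec{\tilde{z}}\).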

Computational Complexity

It follows directly from the algorithmic description that, when \(A(\varvec{z})\) is available, the abridged PAVA for computing \(A(\varvec{\tilde{z}})\) requires no more operations than the standard PAVA. Its computational complexity is therefore at most O(m) if \(x_1, \dots , x_m\) are already sorted. More precisely, the number of averaging operations in the abridged PAVA is bounded from above by \(d^{\varvec{z}} + (b_{s_o}^{\varvec{z}} - b_{s_o - 1}^{\varvec{z}})\), where \(d^{\varvec{z}}\) is the partition size of the antitonic regression \(A(\varvec{z})\) and \(b_{s_o}^{\varvec{z}} - b_{s_o - 1}^{\varvec{z}}\) is the number of elements in the block \(P_{s_o}^{\varvec{z}}\) containing the index \(j_o\) where the value of \(\varvec{z}\) changes. In many practical applications this bound is much smaller than m, but in the worst case it is of order m; for example, let \(w_i = 1\) and \(z_i = m - i\) for \(i = 1, \dots , m\), \(j_o = m\), and \(\tilde{z}_m = m^2\).

Numerical Example

We illustrate the previous procedures with two vectors \(\varvec{z}, \varvec{\tilde{z}} \in \mathbb {R}^9\) and \(\varvec{w} = (1)_{j=1}^9\). Table 1 shows the main steps of the PAVA for \(\varvec{z}\). The first line shows the components of \(\varvec{z}\), the other lines contain the current candidate for \(( \; f_j)_{j=1}^{b_d}\), where \(\varvec{f} = A(\varvec{z})\) eventually, and the current partition \(\mathcal {P}\) is indicated by extra vertical bars. Table 2 shows the abridged PAVA for two different vectors \(\varvec{\tilde{z}}\).

Table 1 Running the PAVA for a vector \(\varvec{z}\)
Table 2 Running the abridged PAVA for two vectors \(\varvec{\tilde{z}} \approx \varvec{z}\)

3.2 Application to Isotonic Distributional Regression

Now we consider a regression framework similar to the one discussed in Mösching and Dümbgen (2020), Henzi et al. (2021) and Jordan et al. (2021). We observe pairs \((X_1,Y_1),\) \((X_2,Y_2),\ldots ,(X_n,Y_n)\) consisting of numbers \(X_i \in \mathcal {X}\) (covariate) and \(Y_i \in \mathbb {R}\) (response), where \(\mathcal {X}\) is a given real interval. Conditional on \((X_i)_{i=1}^n\), the observations \(Y_1, Y_2, \ldots , Y_n\) are viewed as independent random variables such that for \(x \in \mathcal {X}\) and \(y \in \mathbb {R}\),

$$\mathop {I\!P}\nolimits (Y_i \le y) \ = \ F_x(y) \quad \text {if} \ X_i = x .$$

Here \((F_x)_{x \in \mathcal {X}}\) is an unknown family of distribution functions. We only assume that \(F_x(y)\) is non-increasing in \(x \in \mathcal {X}\) for any fixed \(y \in \mathbb {R}\). That is, the family \((F_x)_{x \in \mathcal {X}}\) is increasing with respect to stochastic order.

Let \(x_1< x_2< \cdots < x_m\) be the elements of \(\{X_1,X_2,\ldots ,X_n\}\), and let

$$w_j \ := \ \#\{i: \, X_i = x_j\} , \quad 1 \le j \le m .$$

Then one can estimate \(\varvec{F}(y) := (F_{x_j}(y))_{j=1}^m\) by

$$\widehat{\varvec{F}}(y) \ := \ A(\varvec{z}(y)) ,$$

where \(\varvec{z}(y)\) has components

$$z_j(y) \ := \ w_j^{-1} \sum _{i: \, X_i = x_j} 1_{[Y_i \le y]} , \quad 1 \le j \le m .$$
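For a single threshold y, the estimator can thus be computed by one full antitonic regression. The following Python sketch shows this naive evaluation; it assumes that the hypothetical helper pava_antitonic from the sketch in Section 3.1 is in scope and serves as the baseline which the sequential scheme below accelerates.

```python
import numpy as np

# Naive evaluation of \hat{F}(y) for a single threshold y: build z(y) and apply
# a full antitonic PAVA (pava_antitonic refers to the sketch in Section 3.1).
def idr_naive_single(X, Y, y):
    X, Y = np.asarray(X), np.asarray(Y)
    xs, idx = np.unique(X, return_inverse=True)                   # x_1 < ... < x_m
    w = np.bincount(idx).astype(float)                            # w_j = #{i : X_i = x_j}
    z = np.bincount(idx, weights=(Y <= y).astype(float)) / w      # z_j(y)
    f, _, _, _ = pava_antitonic(z, w)
    return xs, np.asarray(f)                                      # \hat{F}(y) at x_1, ..., x_m
```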

Suppose we have rearranged the observations such that \(Y_1 \le Y_2 \le \cdots \le Y_n\). Let \(\varvec{z}^{(0)} := \varvec{0}\) and

$$\varvec{z}^{(t)} \ := \ \Bigl ( w_j^{-1} \sum _{i \le t: \, X_i = x_j} 1_{[Y_i \le Y_t]} \Bigr )_{j=1}^m$$

for \(1 \le t \le n\). Note that \(\varvec{z}^{(t-1)}\) and \(\varvec{z}^{(t)}\) differ in precisely one component, and that

$$\varvec{z}(y) \ = \ {\left\{ \begin{array}{ll} \varvec{z}^{(0)} &{} \text {if} \ y< Y_1 , \\ \varvec{z}^{(t)} &{} \text {if} \ Y_t \le y< Y_{t+1}, \ 1 \le t < n , \\ \varvec{z}^{(n)} &{} \text {if} \ y \ge Y_n . \end{array}\right. }$$

Thus it suffices to compute \(A(\varvec{z}^{(t)})\) for \(t = 0,1,\ldots ,n\). But \(A(\varvec{z}^{(0)}) = \varvec{0}\), \(A(\varvec{z}^{(n)}) = \varvec{1}\), and for \(1 \le t < n\), one may apply the abridged PAVA to the vectors \(\varvec{z} := \varvec{z}^{(t-1)}\) and \(\varvec{\tilde{z}} := \varvec{z}^{(t)}\). This leads to an efficient algorithm to compute all vectors \(A(\varvec{z}^{(t)})\), \(0 \le t \le n\), if implemented properly.
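The following Python sketch outlines such an implementation: starting from \(A(\varvec{z}^{(0)}) = \varvec{0}\), each of the n steps increases one component of the current vector by \(w_j^{-1}\) and applies the abridged update. It assumes that the hypothetical helper abridged_update from the sketch in Section 3.1 is in scope; all other names are ours.

```python
import numpy as np

# Sequential computation of all A(z^{(t)}), t = 0, 1, ..., n, via the abridged
# PAVA; abridged_update refers to the sketch in Section 3.1.
def idr_abridged(X, Y):
    order = np.argsort(Y, kind="stable")
    X = np.asarray(X)[order]                          # rearrange so that Y_1 <= ... <= Y_n
    xs, idx = np.unique(X, return_inverse=True)       # x_1 < ... < x_m, group of each obs.
    w = np.bincount(idx).astype(float)                # w_j = #{i : X_i = x_j}
    m, n = len(xs), len(X)

    def expand(B, M):                                 # block representation -> fitted vector
        f, start = np.empty(m), 0
        for b, mu in zip(B, M):
            f[start:b] = mu
            start = b
        return f

    z = np.zeros(m)
    B, W, M = [m], [w.sum()], [0.0]                   # A(z^{(0)}) = 0 is a single block
    fits = [expand(B, M)]
    for t in range(n):
        jo = int(idx[t])                              # component that increases at step t+1
        zt = z.copy()
        zt[jo] += 1.0 / w[jo]
        B, W, M = abridged_update(B, W, M, zt, w, jo)
        z = zt
        fits.append(expand(B, M))
    return xs, np.vstack(fits)                        # row t is A(z^{(t)})
```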

Numerical Experiment 1

We generated data sets with \(n = 1000\) independent observation pairs \((X_i,Y_i)\), \(1 \le i \le n\), where \(X_i\) is uniformly distributed on [0, 10] while \(\mathcal {L}(Y_i \,|\, X_i = x)\) is the gamma distribution with shape parameter \(\sqrt{x}\) and scale parameter \(2 + (x - 5)/\sqrt{2 + (x-5)^2}\). Figure 2 shows one such data set. In addition, one sees estimated \(\beta\)-quantile curves for levels \(\beta \in \{0.1, 0.25, 0.5, 0.75, 0.9\}\), resulting from the estimator \(\widehat{\varvec{F}}\).

Fig. 2 A data set with estimated quantile curves

Table 3 Computation times in seconds and ratios of running times

Now we simulated 1000 such data sets and measured the times \(T_1, T_2, T_3\) for computing the estimator \(\widehat{\varvec{F}}\) via the standard, the modified and the abridged PAVA, respectively. Table 3 reports the sample means and standard deviations of these computation times in the 1000 simulations. In addition, one sees the averages and standard deviations of the ratios \(T_i/T_j\), for \(1 \le i < j \le 3\). It turned out that using the modified instead of the standard PAVA reduced the computation time by a factor of 3.46 already. Using the abridged PAVA yielded a further improvement by a factor of 8.90.

Fig. 3 Boxplots of computation times and ratios of running times for varying sample sizes. The whiskers indicate the \(10\%\) and \(90\%\) sample quantiles. The other elements of the boxplots are standard. A logarithmic scale was used for both axes

Figure 3 displays the result of simulation experiments for sample sizes ranging from 200 to \(10\,000\), where the data were generated using the procedure mentioned earlier. The simulations indicate that the improvement due to using modified instead of standard PAVA is almost constant in n, whereas the improvement due to abridged instead of modified PAVA increases with n. Presumably, the complexity of the abridged PAVA for computing the isotonic distributional regression remains quadratic in n. But our numerical experiments show that the constant is substantially smaller than the one resulting from applying the usual PAVA with complexity O(n) for \(n - 1\) different levels of the response.

Numerical Experiment 2

The goal of this experiment is to study how the strength of the monotone association between X and Y influences the efficiency gain of the abridged PAVA for isotonic distributional regression. The gains of the abridged PAVA are expected to be milder when Y is independent of X, and to become larger as the monotone association strengthens. The reason is that, while the standard PAVA proceeds independently of the stochastic order, the abridged PAVA relies on the index \(j_o\) indicating the component increasing in \(\varvec{z}^{(t-1)}\) and on the nature of the partition corresponding to \(A(\varvec{z}^{(t-1)})\), at a given step \(t \in \{1,\ldots ,n\}\) of the procedure. If the monotone association is weak, then the partition corresponding to \(A(\varvec{z}^{(t-1)})\) tends to contain fewer blocks in total and relatively large blocks in the middle of \(\{1,\ldots ,m\}\). If the index \(j_o\) happens to lie in a block containing many indices to the right of \(j_o\), even the abridged PAVA has to inspect all of these.

To demonstrate this claim, we simulated n independent bivariate Gaussian random vectors \((X,Y)^\top\) with correlation \(Corr(X,Y) = \rho \ge 0\). Note that the respective means and variances of X and Y have no influence on the results of the experiment. Indeed, the running times are invariant under strictly isotonic transformations of X and of Y. In particular, the simulations for \(\rho = 0\) cover all situations in which X and Y are stochastically independent with continuous distribution functions. The stochastic order between \(\mathcal {L}(Y \vert X = x_1)\) and \(\mathcal {L}(Y\vert X = x_2)\) for \(x_1 < x_2\) becomes stronger as the correlation \(\rho \in [0,1)\) increases, from an equality in distribution when \(\rho = 0\) to a deterministic ordering when \(\rho\) approaches 1. Now, for sample sizes n ranging from 200 to \(10\,000\) and for each correlation \(\rho \in \{0, 0.5, 0.9\}\), the mean and standard deviation of the time ratio \(T_3/T_1\) were estimated from \(1\,000\) repetitions. The results are summarized in Table 4. As expected, the efficiency gain is smallest for \(\rho = 0\). But even then, it is larger than 6 for \(n \ge 200\) and larger than 9 for \(n \ge 1\,000\).

Table 4 Means (and standard deviations) of the factor of improvement \(T_3/T_1\) for different correlation values \(\rho\) between X and Y and sample sizes n