Abstract
In the context of estimating stochastically ordered distribution functions, the pool-adjacent-violators algorithm (PAVA) can be modified such that the computation times are reduced substantially. This is achieved by studying the dependence of antitonic weighted least squares fits on the response vector to be approximated.
1 Introduction
Let \(\mathcal {X}\) be a set equipped with a binary relation \(\preceq\), for instance, some partial order. The general problem is as follows: For \(m \ge 2\) pairs \((x_1,z_1),\ldots , (x_m,z_m) \in \mathcal {X}\times \mathbb {R}\) and weights \(w_1,\ldots ,w_m > 0\), let
$$A(\varvec{z}) \ := \ \mathop {\arg \min }\limits _{\varvec{f} \in \mathbb {R}^m_{\downarrow ,\varvec{x}}} \, \sum _{j=1}^m w_j ( \; f_j - z_j)^2 , \qquad (1)$$
where
$$\mathbb {R}^m_{\downarrow ,\varvec{x}} \ := \ \bigl \{ \varvec{f} \in \mathbb {R}^m : f_i \ge f_j \ \text {whenever} \ x_i \preceq x_j \bigr \} .$$
Suppose that \(\varvec{z}^{(0)}, \varvec{z}^{(1)}, \ldots , \varvec{z}^{(n)}\) are vectors in \(\mathbb {R}^m\) such that for \(1 \le t \le n\), the two vectors \(\varvec{z}^{(t-1)}\) and \(\varvec{z}^{(t)}\) differ only in a few components, and our task is to compute all antitonic (i.e. monotone decreasing) approximations \(A(\varvec{z}^{(0)}), A(\varvec{z}^{(1)}), \ldots , A(\varvec{z}^{(n)})\). We show that \(A(\varvec{z}^{(t)})\) can be computed efficiently, provided we know already \(A(\varvec{z}^{(t-1)})\). Briefly speaking, this is achieved by noticing that \(A(\varvec{z}^{(t-1)})\) and \(A(\varvec{z}^{(t)})\) share some identical components, and that the remaining components of \(A(\varvec{z}^{(t)})\) can be determined directly from \(A(\varvec{z}^{(t-1)})\) and \(\varvec{z}^{(t)}\) with only a few operations.
The efficient computation of a sequence of antitonic approximations appears naturally in the context of isotonic distributional regression, see Henzi et al. (2021), Mösching and Dümbgen (2020) and Jordan et al. (2021). There, one observes random pairs \((X_1,Y_1),\) \((X_2,Y_2), \ldots , (X_n,Y_n)\) in \(\mathcal {X}\times \mathbb {R}\) such that, conditional on \((X_i)_{i=1}^n\), the random variables \(Y_1, Y_2, \ldots , Y_n\) are independent with distribution functions \(F_{X_1}, F_{X_2}, \ldots , F_{X_n}\), where \((F_x)_{x \in \mathcal {X}}\) is an unknown family of distribution functions. Then the goal is to estimate the latter family under the sole assumption that \(F_{x} \ge F_{x'}\) pointwise whenever \(x \preceq x'\). This notion of ordering of distributions is known as stochastic ordering, or first order stochastic dominance. This isotonic distributional regression leads to the aforementioned least squares problem, where \(x_1, \ldots , x_m\) denote the different elements of \(\{X_1,X_2,\ldots ,X_n\}\), and \(\varvec{z}^{(t)}\) has components
$$z_j^{(t)} \ := \ w_j^{-1} \, \#\{i \le n : X_i = x_j \ \text {and} \ Y_i \le Y_{(t)}\}$$
with \(w_j := \#\{i \le n : X_i = x_j\}\), \(Y_{(0)}:=-\infty\), and \(Y_{(t)}\) denoting the t-th order statistic of the sample \(\{Y_1,Y_2,\ldots ,Y_n\}\).
Section 2 provides some facts about monotone least squares which are useful for the present task. For a complete account and derivations, we refer to Barlow et al. (1972) and Robertson et al. (1988). Then it is shown in Section 3 how to turn this into an efficient computation scheme in case of a total order \(\preceq\). Finally, we discuss the specific application to isotonic distributional regression, and provide numerical experiments which show that computation times of the naive approach are decreased substantially with our procedure.
2 Some Facts About Antitonic Least Squares Estimation
Since the sum on the right hand side of (1) is a strictly convex and coercive function of \(\varvec{f} \in \mathbb {R}^m\), and since \(\mathbb {R}^m_{\downarrow , \varvec{x}}\) is a closed and convex set, \(A(\varvec{z})\) is well-defined. It possesses several well-known characterizations, two of which are particularly useful for our considerations.
The first characterization uses local weighted averages. Let us first introduce some notations. In this article, upper, lower and level sets are seen as subsets of \(\{1, \ldots , m\}\) inheriting the structure of \((\mathcal {X},\preceq )\). More precisely, a set \(U \subset \{1, \ldots , m\}\) is an upper set if \(i \in U\) and \(x_i \preceq x_j\) imply that \(j \in U\). A set \(L \subset \{1, \ldots , m\}\) is a lower set if \(j \in L\) and \(x_i \preceq x_j\) imply that \(i \in L\). The families of all upper and all lower sets are denoted by \(\mathcal {U}\) and \(\mathcal {L}\), respectively. For a non-empty set \(S\subset \{1,\ldots ,m\}\), its weight and the weighted average of \(\varvec{z}\) over S are respectively defined as
$$w_S^{} \ := \ \sum _{j \in S} w_j \quad \text {and}\quad M_S^{}(\varvec{z}) \ := \ w_S^{-1} \sum _{j \in S} w_j z_j .$$
Characterization I
For any index \(1 \le j \le m\),
$$A_j(\varvec{z}) \ = \ \max _{L \in \mathcal {L}: \, j \in L} \ \min _{U \in \mathcal {U}: \, j \in U} M_{L \cap U}^{}(\varvec{z}) \ = \ \min _{U \in \mathcal {U}: \, j \in U} \ \max _{L \in \mathcal {L}: \, j \in L} M_{L \cap U}^{}(\varvec{z}) .$$
For all vectors \(\varvec{f}\in \mathbb {R}^m\), numbers \(\xi \in \mathbb {R}\) and relations \(\ltimes\) in \(\{<,\le ,=,\ge ,>\}\), let
$$[ \ \varvec{f} \ltimes \xi ] \ := \ \bigl \{ j \in \{1,\ldots ,m\} : f_j \ltimes \xi \bigr \} .$$
For example, the family of sets \([ \ \varvec{f} = \xi ]\) indexed by \(\xi \in \{ \; f_1, \ldots ,f_m\}\) yields a partition of \(\{1, \ldots , m\}\) such that two indices i and j belong to the same block if and only if \(f_i = f_j\). In case of \(\varvec{f} \in \mathbb {R}^m_{\downarrow , \varvec{x}}\), \([ \ \varvec{f} < \xi ]\) and \([ \ \varvec{f} \le \xi ]\) are upper sets, whereas \([ \ \varvec{f} > \xi ]\) and \([ \ \varvec{f} \ge \xi ]\) are lower sets.
Characterization II
A vector \(\varvec{f} \in \mathbb {R}^m_{\downarrow , \varvec{x}}\) equals \(A(\varvec{z})\) if and only if for any number \(\xi \in \{ \; f_1,\ldots ,f_m\}\),
$$M_{[ \ \varvec{f} = \xi ] \cap U}^{}(\varvec{z}) \ \le \ \xi \ \le \ M_{[ \ \varvec{f} = \xi ] \cap L}^{}(\varvec{z}) \quad \text {whenever} \ U \in \mathcal {U}, \ L \in \mathcal {L} \ \text {and} \ [ \ \varvec{f} = \xi ] \cap U \ne \emptyset \ne [ \ \varvec{f} = \xi ] \cap L .$$
In particular, \(\xi = M_{[ \ \varvec{f} = \xi ]}^{}(\varvec{z})\), because \(\{1,\ldots ,m\}\) is both an upper and a lower set.
One possible reference for Characterizations I and II is Domínguez-Menchero and González-Rodríguez (2007). They treat the case of a quasi-order \(\preceq\) and more general target functions \(\sum _{j=1}^m h_j( \; f_j)\) to be minimized over \(\varvec{f} \in \mathbb {R}^m_{\downarrow , \varvec{x}}\). For the present setting with an arbitrary binary relation \(\preceq\) and weighted least squares, a relatively short and self-contained derivation of these two characterizations is available from the authors upon request.
The next lemma summarizes some facts about changes in \(A(\varvec{z})\) if some components of \(\varvec{z}\) are increased.
Lemma 2.1
Let \(\varvec{z}, \varvec{\tilde{z}} \in \mathbb {R}^m\) such that \(\varvec{\tilde{z}} \ge \varvec{z}\) component-wise. Then the following conclusions hold true for \(\varvec{f} := A(\varvec{z})\), \(\varvec{\tilde{f}}:=A(\varvec{\tilde{z}})\) and \(K := \{k : \tilde{z}_k > z_k\}\):
(i) \(\varvec{f} \le \varvec{\tilde{f}}\) component-wise.

(ii) \(\tilde{f}_i = f_i\) whenever \(f_i < \min _{k \in K} f_k\).

(iii) \(\tilde{f}_i = f_i\) whenever \(\tilde{f}_i > \max _{k \in K} \tilde{f}_k\).

(iv) \(\tilde{f}_i = \tilde{f}_j\) whenever \(f_i = f_j\) and \(x_i, x_j \preceq x_k\) for all \(k \in K\).
Figure 1 illustrates the statements of Lemma 2.1 on \(\mathbb {R}^2\) equipped with the componentwise order in case of \(K = \{j_o\}\). The colored areas show level sets of a hypothetical antitonic regression \(\varvec{f}\), and \(x_{j_o}\) is the point where \(\tilde{z}_{j_o} > z_{j_o}\). By part (ii) of Lemma 2.1, we know that \(\tilde{f}_i = f_i\) if \(f_i < f_{j_o}\), so the values of \(\varvec{f}\) and \(\varvec{\tilde{f}}\) are equal on the orange and yellow regions in the top right corner, which is indicated by saturated colors. Furthermore, when passing from \(\varvec{z}\) to \(\varvec{\tilde{z}}\), the slightly transparent pink, blue and green level sets on the bottom left (including the point \(x_{j_o}\)) can only be merged, but never be split. This follows from part (iv) of Lemma 2.1. Finally, for all points in the faded pink, blue and green areas, there is no statement about the behavior of the antitonic regression when passing from \(\varvec{z}\) to \(\varvec{\tilde{z}}\).
Proof of Lemma 2.1
Part (i) is a direct consequence of Characterization I.
As to part (ii), if \(f_i < \min _{k \in K} f_k\), then \(K \subset [ \ \varvec{f} > f_i]\), whence
$$\tilde{f}_i \ \le \ f_i .$$
This inequality and part (i) show that \(\tilde{f}_i = f_i\).
Part (iii) is proved analogously. If \(\tilde{f}_i > \max _{k \in K} \tilde{f}_k\), then \(K \subset [ \; \varvec{\tilde{f}} < \tilde{f}_i]\), whence
$$\tilde{f}_i \ \le \ f_i .$$
This inequality and part (i) show that \(\tilde{f}_i = f_i\).
Part (iv) follows directly from parts (i) and (iii). Let i and j be different indices such that \(f_i = f_j\) and \(x_i, x_j \preceq x_k\) for all \(k \in K\). It follows from \(\varvec{\tilde{f}} \in \mathbb {R}^m_{\downarrow , \varvec{x}}\) that \(\tilde{f}_i, \tilde{f}_j \ge \max _{k \in K} \tilde{f}_k\). Consequently, if \(\tilde{f}_j > \tilde{f}_i\), then \(\tilde{f}_j > \max _{k \in K} \tilde{f}_k\), so parts (i) and (iii) would imply that
$$\tilde{f}_j \ = \ f_j \ = \ f_i \ \le \ \tilde{f}_i ,$$
contradicting \(\tilde{f}_j > \tilde{f}_i\).
The Special Case of a Total Order
If one replaces the binary relation \(\preceq\) by a total order \(\le\) on \(\mathcal {X}\), as for example in the case of the usual total order on a subset of \(\mathbb {R}\), the conclusions of Lemma 2.1 take a simpler form. In case of a total order, we assume that the covariates are ordered as follows:
$$x_1 \ \le \ x_2 \ \le \ \cdots \ \le \ x_m ,$$
so that \(i\le j\) implies that \(x_i\le x_j\), while \(x_i < x_j\) implies that \(i < j\).
Corollary 2.2
Let \(\varvec{z}, \varvec{\tilde{z}} \in \mathbb {R}^m\) such that \(\varvec{z} \le \varvec{\tilde{z}}\) component-wise. Then the following conclusions hold true for \(\varvec{f} := A(\varvec{z})\) and \(\varvec{\tilde{f}}:=A(\varvec{\tilde{z}})\):
(i) \(\varvec{f} \le \varvec{\tilde{f}}\) component-wise.

(ii) Let \(k \in \{1,\ldots ,m-1\}\) such that \(f_k > f_{k+1}\) and \((\tilde{z}_j)_{j> k} = (z_j)_{j > k}\). Then \(( \; \tilde{f}_j)_{j> k} = ( \; f_j)_{j > k}\).

(iii) Let \(k \in \{2,\ldots ,m\}\) such that \(\tilde{f}_{k-1} > \tilde{f}_k\) and \((\tilde{z}_j)_{j< k} = (z_j)_{j < k}\). Then \(( \; \tilde{f}_j)_{j< k} = ( \; f_j)_{j < k}\).

(iv) Let \(k \in \{2,\ldots ,m\}\) such that \((\tilde{z}_j)_{j< k} = (z_j)_{j < k}\). Then \(\{j< k : \tilde{f}_j> \tilde{f}_{j+1}\} \subset \{j < k : f_j > f_{j+1}\}\).
3 A Sequential Algorithm for Total Orders
Lemma 2.1 is potentially useful to accelerate algorithms for isotonic distributional regression with arbitrary partial orders, possibly in conjunction with the recursive partitioning algorithm by Luss and Rosset (2014), but this will require additional research. Now we focus on improvements of the well-known pool-adjacent-violators algorithm (PAVA) for a total order.
3.1 General Considerations
In what follows, we assume that \(x_1< \cdots < x_m\), so \(\mathbb {R}^m_{\downarrow , \varvec{x}}\) coincides with \(\mathbb {R}^m_{\downarrow }= \bigl \{ \varvec{f} \in \mathbb {R}^m : f_1 \ge \cdots \ge f_m \bigr \}\). To understand the different variants of the PAVA, let us recall two basic facts about \(A(\varvec{z})\). Let \(\mathcal {P}= (P_1,\ldots ,P_d)\) be a partition of \(\{1,\ldots ,m\}\) into blocks \(P_s = \{b_{s-1}+1,\ldots ,b_s\}\), where \(0 = b_0< b_1< \cdots < b_d = m\), and let \(\mathbb {R}^m_{\mathcal {P}}\) be the set of vectors \(\varvec{f} \in \mathbb {R}^m\) such that \(f_i = f_j\) whenever i, j belong to the same block of \(\mathcal {P}\).
Fact 1
Let \(r_1> \cdots > r_d\) be the sorted elements of \(\{A_i(\varvec{z}) : 1 \le i \le m\}\), and let \(\mathcal {P}\) consist of the blocks \(P_s = \{i : A_i(\varvec{z}) = r_s\}\). Then \(r_s = M_{P_s}^{}(\varvec{z})\) for \(1 \le s \le d\).
Fact 2
Suppose that \(A(\varvec{z}) \in \mathbb {R}^m_{\mathcal {P}}\) for a given partition \(\mathcal {P}\) with \(d \ge 2\) blocks. If \(s \in \{1,\ldots ,d-1\}\) such that \(M_{P_s}^{}(\varvec{z}) \le M_{P_{s+1}}^{}(\varvec{z})\), then \(A_i(\varvec{z})\) is constant in \(i \in P_s \cup P_{s+1}\). That means, one may replace \(\mathcal {P}\) with a coarser partition by pooling \(P_s\) and \(P_{s+1}\) and still, \(A(\varvec{z}) \in \mathbb {R}^m_{\mathcal {P}}\).
Fact 1 is a direct consequence of Characterization II. To verify Fact 2, suppose that \(\varvec{f} \in \mathbb {R}^m_{\downarrow }\cap \mathbb {R}^m_{\mathcal {P}}\) such that \(f_i = r_s\) for \(i \in P_s\), \(f_i = r_{s+1}\) for \(i \in P_{s+1}\), and \(r_s > r_{s+1}\). Now we show that \(\varvec{f}\) cannot be equal to \(A(\varvec{z})\). For \(t \ge 0\) let \(\varvec{f}(t) \in \mathbb {R}^m_{\mathcal {P}}\) be given by
$$f_i(t) \ := \ \begin{cases} r_s - t/w_{P_s}^{} & \text {if} \ i \in P_s , \\ r_{s+1} + t/w_{P_{s+1}}^{} & \text {if} \ i \in P_{s+1} , \\ f_i & \text {otherwise} . \end{cases}$$
Then \(\varvec{f}(0) = \varvec{f}\), and \(\varvec{f}(t) \in \mathbb {R}^m_{\downarrow }\) if \(t \le (r_s - r_{s+1})/(w_{P_{s+1}}^{-1} + w_{P_s}^{-1})\). But
$$\frac{d}{dt} \Big |_{t=0} \, \sum _{i=1}^m w_i ( \; f_i(t) - z_i)^2 \ = \ - 2 \bigl ( r_s - M_{P_s}^{}(\varvec{z}) \bigr ) + 2 \bigl ( r_{s+1} - M_{P_{s+1}}^{}(\varvec{z}) \bigr ) \ < \ 0 ,$$
because \(r_s > r_{s+1}\) and \(M_{P_s}^{}(\varvec{z}) \le M_{P_{s+1}}^{}(\varvec{z})\), so for sufficiently small \(t>0\), \(\varvec{f}(t)\in \mathbb {R}^m_{\downarrow }\) and is superior to \(\varvec{f}(0)\). Hence \(\varvec{f} \ne A(\varvec{z})\).
Facts 1 and 2 indicate already a general PAV strategy to compute \(A(\varvec{z})\). One starts with the finest partition \(\mathcal {P}= (\{1\}, \ldots , \{m\})\). As long as \(\mathcal {P}\) contains two neighboring blocks \(P_s\) and \(P_{s+1}\) such that \(M_{P_s}^{}(\varvec{z}) \le M_{P_{s+1}}^{}(\varvec{z})\), the partition \(\mathcal {P}\) is coarsened by replacing \(P_s\) and \(P_{s+1}\) with the block \(P_s \cup P_{s+1}\).
Standard PAVA
Specifically, one works with three tuples: \(\mathcal {P}= (P_1,\ldots ,P_d)\) is a partition of \(\{1,\ldots ,b_d\}\) into blocks \(P_s = \{b_{s-1}+1,\ldots ,b_s\}\), where \(0 = b_0< b_1< \cdots < b_d\). The number \(b_d\) is running from 1 to m, and the number \(d \ge 1\) changes during the algorithm, too. The tuples \(\mathcal {W}= (W_1,\ldots ,W_d)\) and \(\mathcal {M}= (M_1,\ldots ,M_d)\) contain the corresponding weights \(W_s = w_{P_s}^{}\) and weighted means \(M_s = M_{P_s}^{}(\varvec{z})\). Before increasing \(b_d\), the tuples \(\mathcal {P}\), \(\mathcal {W}\) and \(\mathcal {M}\) describe the minimizer of \(\sum _{i=1}^{b_d} w_i ( \; f_i - z_i)^2\) over all \(\varvec{f} \in \mathbb {R}^{b_d}_{\downarrow }\). Here is the complete algorithm:
Initialization: We set \(\mathcal {P}\leftarrow (\{1\})\), \(\mathcal {W}\leftarrow (w_1)\), \(\mathcal {M}\leftarrow (z_1)\), and \(d \leftarrow 1\).
Induction step: If \(b_d < m\), we add a new block by setting
$$\mathcal {P}\leftarrow (\mathcal {P}, \{b_d+1\}) , \quad \mathcal {W}\leftarrow (\mathcal {W}, w_{b_d+1}) , \quad \mathcal {M}\leftarrow (\mathcal {M}, z_{b_d+1})$$
and \(d \leftarrow d+1\). Then, while \(d > 1\) and \(M_{d-1} \le M_d\), we pool the “violators” \(P_{d-1}\) and \(P_d\) by setting
$$P_{d-1} \leftarrow P_{d-1} \cup P_d , \quad M_{d-1} \leftarrow \frac{W_{d-1} M_{d-1} + W_d M_d}{W_{d-1} + W_d} , \quad W_{d-1} \leftarrow W_{d-1} + W_d ,$$
dropping the last entry of each tuple, and \(d \leftarrow d-1\).
Finalization: Eventually, \(\mathcal {P}\) is a partition of \(\{1,\ldots ,m\}\) into blocks such that \(M_1> \cdots > M_d\) and
$$A_i(\varvec{z}) \ = \ M_s \quad \text {for} \ i \in P_s, \ 1 \le s \le d .$$
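As an illustration, the standard PAVA can be sketched in a few lines of Python. This is our own rendering, not the authors' R implementation; the function name and the data layout (block endpoints stored exclusively) are choices made here for clarity.

```python
def pava_decreasing(z, w):
    """Standard PAVA for the weighted least-squares antitonic fit A(z).

    Maintains the three tuples of the text: block endpoints b_s
    (stored exclusively, so block s covers indices b[s-1]..b[s]-1),
    block weights W_s and block means M_s."""
    b, W, M = [], [], []
    for i in range(len(z)):
        # induction step: append the new singleton block {i}
        b.append(i + 1)
        W.append(w[i])
        M.append(z[i])
        # pool adjacent violators while M_{d-1} <= M_d
        while len(M) > 1 and M[-2] <= M[-1]:
            W_last, M_last, b_last = W.pop(), M.pop(), b.pop()
            M[-1] = (W[-1] * M[-1] + W_last * M_last) / (W[-1] + W_last)
            W[-1] += W_last
            b[-1] = b_last
    # finalization: A_i(z) = M_s for i in P_s
    f, start = [], 0
    for end, mean in zip(b, M):
        f.extend([mean] * (end - start))
        start = end
    return f
```

For instance, `pava_decreasing([3, 1, 2], [1, 1, 1])` pools the last two entries and returns `[3, 1.5, 1.5]`.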
Modified PAVA
In our specific applications of the PAVA, we are dealing with vectors \(\varvec{z}\) containing larger blocks \(\{a,\ldots ,b\}\) on which \(i \mapsto z_i\) is constant. Indeed, in regression settings with continuously distributed covariates and responses, \(\varvec{z}\) will always be a \(\{0,1\}\)-valued vector. Then it is worthwhile to utilize Fact 2 and modify the initialization as well as the very beginning of the induction step as follows:
For the initialization, we determine the largest index \(b_1\) such that \(z_1 = \cdots = z_{b_1}\) and set \(P_1 = \{1,\ldots ,b_1\}\). Then we set \(\mathcal {P}\leftarrow (P_1)\), \(\mathcal {W}\leftarrow (w_{P_1}^{})\) and \(\mathcal {M}\leftarrow (z_{b_1}^{})\).
At the beginning of the induction step, we determine the largest index \(b_{d+1} > b_d\) such that \(z_{b_d+1} = \cdots = z_{b_{d+1}}\) and the corresponding weight \(W_{P_{d+1}}^{}\) with \(P_{d+1} = \{b_d+1,\ldots ,b_{d+1}\}\). Then we set \(\mathcal {P}\leftarrow (\mathcal {P},P_{d+1})\), \(\mathcal {W}\leftarrow (\mathcal {W},W_{P_{d+1}})\), \(\mathcal {M}\leftarrow (\mathcal {M}, z_{b_{d+1}}^{})\), and \(d \leftarrow d+1\).
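In code, the modification amounts to creating new blocks from maximal constant runs of \(\varvec{z}\) instead of singletons. A hedged Python sketch of this variant (names are again our own):

```python
def pava_decreasing_runs(z, w):
    """Modified PAVA: each new block is a maximal run of constant z-values,
    which Fact 2 allows, since those indices end up pooled anyway."""
    m = len(z)
    b, W, M = [], [], []
    i = 0
    while i < m:
        # find the largest j such that z[i] == z[i+1] == ... == z[j-1]
        j, run_weight = i, 0
        while j < m and z[j] == z[i]:
            run_weight += w[j]
            j += 1
        b.append(j)
        W.append(run_weight)
        M.append(z[i])
        # pool adjacent violators exactly as in the standard PAVA
        while len(M) > 1 and M[-2] <= M[-1]:
            W_last, M_last, b_last = W.pop(), M.pop(), b.pop()
            M[-1] = (W[-1] * M[-1] + W_last * M_last) / (W[-1] + W_last)
            W[-1] += W_last
            b[-1] = b_last
        i = j
    f, start = [], 0
    for end, mean in zip(b, M):
        f.extend([mean] * (end - start))
        start = end
    return f
```

On a \(\{0,1\}\)-valued vector such as \(\varvec{z} = (1,1,0,0,1)\) with unit weights, only three blocks are created instead of five.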
Abridged PAVA
Suppose that we have computed \(A(\varvec{z})\) with corresponding tuples \(\mathcal {P}= (P_1,\ldots ,P_d)\), \(\mathcal {W}= (W_1,\ldots ,W_d)\) and \(\mathcal {M}= (M_1,\ldots ,M_d)\) via the PAVA. Now let \(\varvec{\tilde{z}} \in \mathbb {R}^m\) such that \(\tilde{z}_{j_o} > z_{j_o}\) for one index \(j_o \in \{1,\ldots ,m\}\), while \((\tilde{z}_j)_{j \ne j_o} = (z_j)_{j \ne j_o}\). Let \(j_o \in P_{s_o}\) with \(s_o \in \{1,\ldots ,d\}\). By parts (ii) and (iv) of Corollary 2.2, the partition corresponding to \(A(\varvec{\tilde{z}})\) will be a coarsening of the partition with the following blocks:
$$P_1, \ \ldots , \ P_{s_o-1}, \ \{b_{s_o-1}+1,\ldots ,j_o\}, \ \{j_o+1\}, \ \ldots , \ \{b_{s_o}\}, \ P_{s_o+1}, \ \ldots , \ P_d .$$
Moreover, \(A_i(\varvec{\tilde{z}}) = A_i(\varvec{z})\) for \(i > b_{s_o}\). This allows us to compute \(A(\varvec{\tilde{z}})\) as follows, keeping copies of the auxiliary objects for \(A(\varvec{z})\) and indicating this with a superscript \(\varvec{z}\):
Initialization: We determine \(s_o \in \{1,\ldots ,d^{\varvec{z}}\}\) such that \(j_o \in P_{s_o}^{\varvec{z}}\). Then we set
$$\mathcal {P}\leftarrow \bigl ( P_1^{\varvec{z}}, \ldots , P_{s_o-1}^{\varvec{z}}, \{b_{s_o-1}^{\varvec{z}}+1,\ldots ,j_o\} \bigr ), \quad \mathcal {W}\leftarrow \bigl ( W_1^{\varvec{z}}, \ldots , W_{s_o-1}^{\varvec{z}}, w_{\{b_{s_o-1}^{\varvec{z}}+1,\ldots ,j_o\}}^{} \bigr ), \quad \mathcal {M}\leftarrow \bigl ( M_1^{\varvec{z}}, \ldots , M_{s_o-1}^{\varvec{z}}, M_{\{b_{s_o-1}^{\varvec{z}}+1,\ldots ,j_o\}}^{}(\varvec{\tilde{z}}) \bigr )$$
and \(d \leftarrow s_o\). While \(d > 1\) and \(M_{d-1} \le M_d\), we pool the violators \(P_{d-1}\) and \(P_d\) as in the induction step of PAVA. (This initialization is justified by part (iv) of Corollary 2.2.)
Induction step: If \(j_o < b_{s_o}^{\varvec{z}}\), we run the induction step of PAVA for \(b_d\) running from \(j_o+1\) to \(b_{s_o}^{\varvec{z}}\) with \(\varvec{\tilde{z}}\) in place of \(\varvec{z}\).
Finalization: If \(b_{s_o}^{\varvec{z}} < m\), we set
$$\mathcal {P}\leftarrow \bigl ( \mathcal {P}, P_{s_o+1}^{\varvec{z}}, \ldots , P_{d^{\varvec{z}}}^{\varvec{z}} \bigr ), \quad \mathcal {W}\leftarrow \bigl ( \mathcal {W}, W_{s_o+1}^{\varvec{z}}, \ldots , W_{d^{\varvec{z}}}^{\varvec{z}} \bigr ), \quad \mathcal {M}\leftarrow \bigl ( \mathcal {M}, M_{s_o+1}^{\varvec{z}}, \ldots , M_{d^{\varvec{z}}}^{\varvec{z}} \bigr )$$
and \(d \leftarrow d + d^{\varvec{z}} - s_o\). The new pair \((\mathcal {P},\mathcal {M})\) yields the vector \(A(\varvec{\tilde{z}})\). This finalization is justified by part (ii) of Corollary 2.2.
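The three steps above can be rendered as the following Python sketch (0-based indices; `pava_state`, `abridged_update` and the state layout are our own choices and differ from the authors' R code):

```python
import bisect

def _pool(b, W, M):
    # pool the last two blocks while their means violate M[-2] > M[-1]
    while len(M) > 1 and M[-2] <= M[-1]:
        W_last, M_last, b_last = W.pop(), M.pop(), b.pop()
        M[-1] = (W[-1] * M[-1] + W_last * M_last) / (W[-1] + W_last)
        W[-1] += W_last
        b[-1] = b_last

def pava_state(z, w):
    """Standard PAVA, returning the tuples (b, W, M): exclusive block
    endpoints, block weights and block means, with M strictly decreasing."""
    b, W, M = [], [], []
    for i in range(len(z)):
        b.append(i + 1); W.append(w[i]); M.append(z[i])
        _pool(b, W, M)
    return b, W, M

def abridged_update(z_new, w, b, W, M, j0):
    """Abridged PAVA: update the state (b, W, M) of the old vector z when
    z increased in component j0 only (z_new agrees with z elsewhere)."""
    s0 = bisect.bisect_right(b, j0)            # block containing j0
    start = b[s0 - 1] if s0 > 0 else 0
    end_old = b[s0]                            # old right endpoint of block s0
    # initialization: keep blocks left of s0, add the block {start, ..., j0}
    nb, nW, nM = b[:s0], W[:s0], M[:s0]
    block_w = sum(w[start:j0 + 1])
    block_m = sum(wi * zi for wi, zi in
                  zip(w[start:j0 + 1], z_new[start:j0 + 1])) / block_w
    nb.append(j0 + 1); nW.append(block_w); nM.append(block_m)
    _pool(nb, nW, nM)
    # induction: ordinary PAVA steps for the rest of the old block s0
    for i in range(j0 + 1, end_old):
        nb.append(i + 1); nW.append(w[i]); nM.append(z_new[i])
        _pool(nb, nW, nM)
    # finalization: blocks to the right of s0 are unchanged
    return nb + b[s0 + 1:], nW + W[s0 + 1:], nM + M[s0 + 1:]
```

A quick check against a fresh PAVA run: with \(\varvec{z} = (3,1,2,0)\), unit weights, and \(\tilde{z}_2 = 4\) (index 1 in 0-based notation), the updated state agrees with `pava_state` applied to the new vector from scratch.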
Computational Complexity
It follows directly from the algorithmic description that, when \(A(\varvec{z})\) is available, the abridged PAVA for computing \(A(\varvec{\tilde{z}})\) requires no more operations than the standard PAVA. Its computational complexity is therefore at most of order O(m) if \(x_1, \dots , x_m\) are already sorted. More precisely, the number of averaging operations in the abridged PAVA is bounded from above by \(d^{\varvec{z}} + (b_{s_o}^{\varvec{z}} - b_{s_o - 1}^{\varvec{z}})\), where \(d^{\varvec{z}}\) is the partition size of the antitonic regression \(A(\varvec{z})\) and \(b_{s_o}^{\varvec{z}} - b_{s_o - 1}^{\varvec{z}}\) is the number of elements in the block \(P_{s_o}^{\varvec{z}}\) containing the index \(j_o\) where the value of \(\varvec{z}\) changes. In many practical applications this number is much smaller than m, but in the worst case it may equal exactly m; for example, let \(w_i = 1\) and \(z_i = m - i\) for \(i = 1, \dots , m\), \(j_o = m\), and \(\tilde{z}_m = m^2\).
Numerical Example
We illustrate the previous procedures with two vectors \(\varvec{z}, \varvec{\tilde{z}} \in \mathbb {R}^9\) and \(\varvec{w} = (1)_{j=1}^9\). Table 1 shows the main steps of the PAVA for \(\varvec{z}\). The first line shows the components of \(\varvec{z}\), the other lines contain the current candidate for \(( \; f_j)_{j=1}^{b_d}\), where \(\varvec{f} = A(\varvec{z})\) eventually, and the current partition \(\mathcal {P}\) is indicated by extra vertical bars. Table 2 shows the abridged PAVA for two different vectors \(\varvec{\tilde{z}}\).
3.2 Application to Isotonic Distributional Regression
Now we consider a regression framework similar to the one discussed in Mösching and Dümbgen (2020), Henzi et al. (2021) and Jordan et al. (2021). We observe pairs \((X_1,Y_1),\) \((X_2,Y_2),\ldots ,(X_n,Y_n)\) consisting of numbers \(X_i \in \mathcal {X}\) (covariate) and \(Y_i \in \mathbb {R}\) (response), where \(\mathcal {X}\) is a given real interval. Conditional on \((X_i)_{i=1}^n\), the observations \(Y_1, Y_2, \ldots , Y_n\) are viewed as independent random variables such that for \(x \in \mathcal {X}\) and \(y \in \mathbb {R}\),
$$\mathbb {P}(Y_i \le y \,|\, X_i = x) \ = \ F_x(y) .$$
Here \((F_x)_{x \in \mathcal {X}}\) is an unknown family of distribution functions. We only assume that \(F_x(y)\) is non-increasing in \(x \in \mathcal {X}\) for any fixed \(y \in \mathbb {R}\). That means, the family \((F_x)_{x \in \mathcal {X}}\) is increasing with respect to stochastic order.
Let \(x_1< x_2< \cdots < x_m\) be the elements of \(\{X_1,X_2,\ldots ,X_n\}\), and let
$$w_j \ := \ \#\{i \le n : X_i = x_j\} \quad \text {for} \ 1 \le j \le m .$$
Then one can estimate \(\varvec{F}(y) := (F_{x_j}(y))_{j=1}^m\) by
$$\widehat{\varvec{F}}(y) \ := \ A(\varvec{z}(y)) ,$$
where \(\varvec{z}(y)\) has components
$$z_j(y) \ := \ w_j^{-1} \, \#\{i \le n : X_i = x_j \ \text {and} \ Y_i \le y\} .$$
Suppose we have rearranged the observations such that \(Y_1 \le Y_2 \le \cdots \le Y_n\). Let \(\varvec{z}^{(0)} := \varvec{0}\) and
$$\varvec{z}^{(t)} \ := \ \varvec{z}(Y_t)$$
for \(1 \le t \le n\). Note that \(\varvec{z}^{(t-1)}\) and \(\varvec{z}^{(t)}\) differ in precisely one component, and that
$$\widehat{\varvec{F}}(y) \ = \ A(\varvec{z}^{(t)}) \quad \text {whenever} \ Y_t \le y < Y_{t+1} ,$$
with \(Y_0 := -\infty\) and \(Y_{n+1} := +\infty\).
Thus it suffices to compute \(A(\varvec{z}^{(t)})\) for \(t = 0,1,\ldots ,n\). But \(A(\varvec{z}^{(0)}) = \varvec{0}\), \(A(\varvec{z}^{(n)}) = \varvec{1}\), and for \(1 \le t < n\), one may apply the abridged PAVA to the vectors \(\varvec{z} := \varvec{z}^{(t-1)}\) and \(\varvec{\tilde{z}} := \varvec{z}^{(t)}\). This leads to an efficient algorithm to compute all vectors \(A(\varvec{z}^{(t)})\), \(0 \le t \le n\), if implemented properly.
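To make the construction concrete, here is a small Python sketch (the function name and layout are ours) that builds the vectors \(\varvec{z}^{(t)}\) from a sample. It is a naive construction for illustration; an efficient implementation would update the counts incrementally and feed each vector to the abridged PAVA, reusing the state for \(\varvec{z}^{(t-1)}\).

```python
def threshold_vectors(X, Y):
    """Build x_1 < ... < x_m, the weights w_j = #{i : X_i = x_j}, and the
    vectors z^(0), ..., z^(n) with components
    z^(t)_j = w_j^{-1} #{i : X_i = x_j and Y_i <= Y_(t)}."""
    xs = sorted(set(X))
    w = [sum(1 for Xi in X if Xi == xj) for xj in xs]
    thresholds = [float("-inf")] + sorted(Y)      # Y_(0) := -infinity
    zs = [[sum(1 for Xi, Yi in zip(X, Y) if Xi == xj and Yi <= thr) / wj
           for xj, wj in zip(xs, w)]
          for thr in thresholds]
    return xs, w, zs
```

With distinct responses, one can check that consecutive vectors `zs[t-1]` and `zs[t]` indeed differ in exactly one component.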
Numerical Experiment 1
We generated data sets with \(n = 1000\) independent observation pairs \((X_i,Y_i)\), \(1 \le i \le n\), where \(X_i\) is uniformly distributed on [0, 10] while \(\mathcal {L}(Y_i \,|\, X_i = x)\) is the gamma distribution with shape parameter \(\sqrt{x}\) and scale parameter \(2 + (x - 5)/\sqrt{2 + (x-5)^2}\). Figure 2 shows one such data set. In addition, one sees estimated \(\beta\)-quantile curves for levels \(\beta \in \{0.1, 0.25, 0.5, 0.75, 0.9\}\), resulting from the estimator \(\widehat{\varvec{F}}\).
Now we simulated 1000 such data sets and measured the times \(T_1, T_2, T_3\) for computing the estimator \(\widehat{\varvec{F}}\) via the standard, the modified and the abridged PAVA, respectively. Table 3 reports the sample means and standard deviations of these computation times over the 1000 simulations, together with the averages and standard deviations of the ratios \(T_i/T_j\) for \(1 \le i < j \le 3\). Using the modified instead of the standard PAVA already reduced the computation time by a factor of 3.46; using the abridged PAVA yielded a further improvement by a factor of 8.90.
Figure 3 displays the result of simulation experiments for sample sizes ranging from 200 to \(10\,000\), where the data were generated using the procedure mentioned earlier. The simulations indicate that the improvement due to using modified instead of standard PAVA is almost constant in n, whereas the improvement due to abridged instead of modified PAVA increases with n. Presumably, the complexity of the abridged PAVA for computing the isotonic distributional regression remains quadratic in n. But our numerical experiments show that the constant is substantially smaller than the one resulting from applying the usual PAVA with complexity O(n) for \(n - 1\) different levels of the response.
Numerical Experiment 2
The goal of this experiment is to study how the strength of the monotone association between X and Y affects the efficiency gain of the abridged PAVA for isotonic distributional regression. The gains are expected to be mild when Y is independent of X and to grow as the monotone association strengthens. The reason is that, while the standard PAVA proceeds independently of the stochastic order, the abridged PAVA relies on the index \(j_o\) of the component increasing in \(\varvec{z}^{(t-1)}\) and on the partition corresponding to \(A(\varvec{z}^{(t-1)})\) at a given step \(t \in \{1,\ldots ,n\}\) of the procedure. If the monotone association is weak, the partition corresponding to \(A(\varvec{z}^{(t-1)})\) tends to contain fewer blocks in total and relatively large blocks in the middle of \(\{1,\ldots ,n\}\). If the index \(j_o\) happens to lie in a block containing many indices to the right of \(j_o\), even the abridged PAVA has to inspect all of these.
To demonstrate this claim, we simulated n independent bivariate Gaussian random vectors \((X,Y)^\top\) with correlation \(Corr(X,Y) = \rho \ge 0\). Note that the respective means and variances of X and Y have no influence on the results of the experiment. Indeed, the running times are invariant under strictly isotonic transformations of X and of Y. In particular, the simulations for \(\rho = 0\) cover all situations in which X and Y are stochastically independent with continuous distribution functions. The stochastic order between \(\mathcal {L}(Y \vert X = x_1)\) and \(\mathcal {L}(Y\vert X = x_2)\) for \(x_1 < x_2\) becomes stronger as the correlation \(\rho \in [0,1)\) increases, from an equality in distribution when \(\rho = 0\) to a deterministic ordering when \(\rho\) approaches 1. Now, for sample sizes n ranging from 200 to \(10\,000\) and for each correlation \(\rho \in \{0, 0.5, 0.9\}\), the mean and standard deviation of the time ratio \(T_3/T_1\) were estimated from \(1\,000\) repetitions. The results are summarized in Table 4. As expected, the efficiency gain is smallest for \(\rho = 0\). But even then, it is larger than 6 for \(n \ge 200\) and larger than 9 for \(n \ge 1\,000\).
Availability of Data and Material
Not applicable.
Code availability
R code is available at https://github.com/AlexanderHenzi/abridgedPava.
References
Barlow RE, Bartholomew DJ, Bremner JM, Brunk HD (1972) Statistical inference under order restrictions. The theory and application of isotonic regression. John Wiley & Sons, London-New York-Sydney. Wiley Series in Probability and Mathematical Statistics
Domínguez-Menchero JS, González-Rodríguez G (2007) Analyzing an extension of the isotonic regression problem. Metrika 66:19–30
Henzi A, Ziegel JF, Gneiting T (2021) Isotonic distributional regression. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 83:963–993
Jordan AI, Mühlemann A, Ziegel JF (2021) Characterizing the optimal solutions to the isotonic regression problem for identifiable functionals. Annals of the Institute of Statistical Mathematics (to appear)
Luss R, Rosset S (2014) Generalized isotonic regression. J Comput Graph Statist 23:192–210. https://doi.org/10.1080/10618600.2012.741550
Mösching A, Dümbgen L (2020) Monotone least squares and isotonic quantiles. Electron J Stat 14:24–49. https://doi.org/10.1214/19-EJS1659
Robertson T, Wright FT, Dykstra RL (1988) Order restricted statistical inference. Wiley Series in Probability and Mathematical Statistics: Probability and Mathematical Statistics, John Wiley & Sons, Ltd., Chichester
Acknowledgements
The authors are grateful to a reviewer for constructive comments.
Funding
Open access funding provided by University of Bern. This work was supported by Swiss National Science Foundation.
Ethics declarations
Conflict of Interest
Not applicable.
Henzi, A., Mösching, A. & Dümbgen, L. Accelerating the Pool-Adjacent-Violators Algorithm for Isotonic Distributional Regression. Methodol Comput Appl Probab 24, 2633–2645 (2022). https://doi.org/10.1007/s11009-022-09937-2