Correction to: Statistical Papers, https://doi.org/10.1007/s00362-021-01273-w

The proof of Theorem 1 is based on inequality (12). However, the lower bound of inequality (12) is not correct: although \({\mathscr {H}}_0\) ensures the convexity of \(G^{-1}\circ F\), the function \(G^{-1}(F-\epsilon )\) is not necessarily convex. This mistake invalidates the claim of Theorem 1; in fact, simulations show that, especially when G is heavy-tailed and \(F=G\), the GRCM of \(F_n\) with respect to G may diverge from F. Accordingly, the consistency property claimed in Theorem 3, which is based on Theorem 1, is also invalidated.
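To see how convexity can fail (a concrete illustration, not part of the original note), take both G and F equal to the standard exponential distribution, so that \(G^{-1}\circ F(x)=x\) is convex, while, for any \(\epsilon >0\),

$$\begin{aligned} G^{-1}(F(x)-\epsilon )=-\log \left( e^{-x}+\epsilon \right) ,\qquad \frac{d}{dx}\,G^{-1}(F(x)-\epsilon )=\frac{1}{1+\epsilon e^{x}}, \end{aligned}$$

whose derivative is strictly decreasing in x; hence \(G^{-1}(F-\epsilon )\) is strictly concave.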

The GRCM may still provide a valid and flexible approach for testing convexity of the generalised hazard function, thanks to the properties discussed in Sect. 3. Moreover, the simulation in Sect. 4.2 shows that the GRCM test for the IHR family, \(\mathrm{KS}_n\), is comparable to the consistent test \(\mathrm{KT}_n^1\), in terms of empirical power.

Although the consistency of the GRCM tests cannot be established as in Theorem 3, a weaker consistency result, for a general G, can be obtained by evaluating the null hypothesis, and the GRCM, over a restricted set, as shown below.

Let \(S_\nu =\{x:x\le F^{-1}(1-\nu )\}\), for some arbitrarily small \(\nu \in [0,1]\), and consider the hypothesis:

$$\begin{aligned} {\mathscr {H}}_0^\nu :G^{-1}\circ F\text { is convex on }S_\nu . \end{aligned}$$
(1)

The empirical counterpart of \(S_\nu \) is \(S_{n,\nu }=\{x:x\le X_{(n_\nu )}\}\), where \(X_{(n_\nu )}= F_n^{-1}(1-\nu )\) is the empirical quantile. Denote by \(h|_A\) the restriction of a function h to the set A. Accordingly, let \(F_{n,\nu }^G\) be the GRCM of \(F_n|_{S_{n,\nu }}\) with respect to G (that is, the largest function that does not exceed \(F_n|_{S_{n,\nu }}\) and such that \(G^{-1}\circ F_n|_{S_{n,\nu }}\) is convex). Then consider the following restricted test statistic:

$$\begin{aligned} \mathrm{KS}_{n,\nu }({\mathbf {X}})=\sup _{x \in S_{n,\nu }} \vert F_n(x)-F_{n,\nu }^G(x) \vert -\frac{1}{n}=\max _{i\in (1,n_\nu ]}\left( \frac{i-1}{n}-F_{n,\nu }^G(X_{(i)}) \right) . \end{aligned}$$
(2)
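To fix ideas, the restricted statistic in (2) can be computed from the lower convex hull of the transformed empirical points, since the restricted GRCM equals \(G\circ (G^{-1}\circ F_n|_{S_{n,\nu }})^{\mathbb {I}}\) (Proposition 1). The following sketch is an illustration only (the function names are ours, not from the paper), assuming vectorised CDF and quantile functions `G` and `Ginv`:

```python
import numpy as np

def lower_convex_hull(x, y):
    """Greatest convex minorant of the points (x[i], y[i]), evaluated at
    the x[i], via a monotone-chain lower hull (x sorted, increasing)."""
    hull = [0]  # indices of the hull vertices
    for i in range(1, len(x)):
        while len(hull) >= 2:
            j, k = hull[-2], hull[-1]
            # drop k whenever it lies on or above the chord joining j and i
            if (y[k] - y[j]) * (x[i] - x[j]) >= (y[i] - y[j]) * (x[k] - x[j]):
                hull.pop()
            else:
                break
        hull.append(i)
    # interpolate the hull back onto every x[i]
    return np.interp(x, x[hull], y[hull])

def ks_stat_restricted(sample, G, Ginv, nu):
    """Restricted statistic KS_{n,nu} of (2): the restricted GRCM is
    G composed with the GCM of G^{-1} o F_n on S_{n,nu} (Proposition 1)."""
    xs = np.sort(np.asarray(sample, dtype=float))
    n = len(xs)
    n_nu = int(np.ceil((1.0 - nu) * n))  # rank of the empirical (1-nu)-quantile
    i = np.arange(1, n_nu + 1)
    # transformed empirical points; restricting to i <= n_nu keeps
    # G^{-1}(i/n) finite even when G has unbounded support
    y = Ginv(i / n)
    grcm = G(lower_convex_hull(xs[:n_nu], y))  # restricted GRCM F^G_{n,nu}
    return np.max((i - 1) / n - grcm)
```

For the IHR test, G is the standard exponential, so `G = lambda t: 1 - np.exp(-t)` and `Ginv = lambda u: -np.log(1 - u)`; the sketch assumes a continuous sample (no ties).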

Clearly, \(F_{n,0}^G=F_n^G\) and therefore \(\mathrm{KS}_{n,0}({\mathbf {X}})=\mathrm{KS}_{n}({\mathbf {X}})\). Similarly to the case \(\nu =0\), the least favourable distribution of \(\mathrm{KS}_{n,\nu }\) is obtained by simulating from G, as established by Theorem 2. For some fixed \(\alpha \in (0,1)\), the test rejects \({\mathscr {H}}_0^\nu \) when \(\mathrm{KS}_{n,\nu }({\mathbf {x}})\ge c_{\alpha ,n,\nu }\), where \(c_{\alpha ,n,\nu }\) is the smallest value satisfying \(P(\mathrm{KS}_{n,\nu }({\mathbf {Y}})\ge c_{\alpha ,n,\nu })\le \alpha \), with \({\mathbf {Y}}\) an i.i.d. sample from G. For every \(\nu >0\), the restricted GRCM, \(F_{n,\nu }^G\), converges strongly and uniformly to F in \(S_\nu \), implying the following consistency property.
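The simulation-based calibration just described can be sketched as follows (again an illustration with hypothetical function names; the statistic of (2) is restated compactly so the snippet runs on its own):

```python
import numpy as np

def grcm_stat(sample, G, Ginv, nu):
    """Compact version of the restricted statistic KS_{n,nu} of (2):
    G o GCM(G^{-1} o F_n) on S_{n,nu}, via a lower convex hull."""
    xs = np.sort(np.asarray(sample, dtype=float))
    n = len(xs)
    m = int(np.ceil((1.0 - nu) * n))      # rank of the empirical (1-nu)-quantile
    i = np.arange(1, m + 1)
    x, y = xs[:m], Ginv(i / n)
    hull = [0]
    for t in range(1, m):
        # pop the last vertex while it lies on or above the chord to point t
        while len(hull) >= 2 and (y[hull[-1]] - y[hull[-2]]) * (x[t] - x[hull[-2]]) \
                >= (y[t] - y[hull[-2]]) * (x[hull[-1]] - x[hull[-2]]):
            hull.pop()
        hull.append(t)
    grcm = G(np.interp(x, x[hull], y[hull]))  # restricted GRCM F^G_{n,nu}
    return np.max((i - 1) / n - grcm)

def critical_value(n, G, Ginv, nu, alpha, B=2000, rng=None):
    """Monte Carlo estimate of c_{alpha,n,nu}: (1-alpha)-quantile of
    KS_{n,nu}(Y) over B samples of size n drawn from G, which is the
    least favourable distribution by Theorem 2."""
    rng = np.random.default_rng(rng)
    stats = [grcm_stat(Ginv(rng.uniform(size=n)), G, Ginv, nu) for _ in range(B)]
    return np.quantile(stats, 1.0 - alpha)
```

The test then rejects \({\mathscr {H}}_0^\nu \) when `grcm_stat(x, G, Ginv, nu) >= critical_value(n, G, Ginv, nu, alpha)`; samples from G are drawn by inverse-transform sampling.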

Theorem 1

If \({\mathscr {H}}_0^\nu \) is false, \(\lim _{n\rightarrow \infty }P(\mathrm{KS}_{n,\nu }({\mathbf {X}})\ge c_{\alpha ,n,\nu })=1\), for every \(\nu >0\).

Proof

Because \(F_n\) converges a.s. and uniformly to F, and \(G^{-1}\) is uniformly continuous on \([0,1-\nu ]\) (for \(\nu >0\)), \(G^{-1}\circ F_n\) converges a.s. and uniformly to \(G^{-1}\circ F\) on \(S_\nu \). Recall that \((h)^{\mathbb {I}}\) denotes the GCM of a function h. If \({\mathscr {H}}_{0}^\nu \) is true, Marshall’s inequality gives \(\sup _{S_{n,\nu }}|G^{-1}\circ F_n(x)-G^{-1}\circ F(x)|\ge \sup _{S_{n,\nu }}|(G^{-1}\circ F_n|_{S_{n,\nu }})^{\mathbb {I}}(x)-G^{-1}\circ F(x)|\), which implies strong uniform consistency of \((G^{-1}\circ F_n|_{S_{n,\nu }})^{\mathbb {I}}\) on \({S_\nu }\) (if \(\nu >0\), \(F_n^{-1}(1-\nu )\) converges a.s. to \(F^{-1}(1-\nu )\)). Since G is absolutely continuous, \(G\circ (G^{-1}\circ F_n|_{S_{n,\nu }})^{\mathbb {I}}=F_{n,\nu }^G\) (see Proposition 1) converges a.s. and uniformly to F on \(S_\nu \), for every \(\nu >0\). Then, as in the proof of Theorem 3, it can be shown that, under \({\mathscr {H}}_{0}^\nu \), \(c_{\alpha ,n,\nu }\rightarrow 0\) as \(n\rightarrow \infty \).

Denote with \(F^G_\nu \) the GRCM of \(F|_{S_\nu }\), formally,

$$\begin{aligned} F^G_\nu (x)=\sup \{u(x):G^{-1}\circ u\text { is convex on }S_\nu \text { and }u(y)\le F(y),\forall y\in S_\nu \}. \end{aligned}$$
(3)

If \(G^{-1}\circ F^G_\nu (x)\le (G^{-1}\circ F|_{S_\nu })^{\mathbb {I}}(x)\), then \(G\circ (G^{-1}\circ F|_{S_\nu })^{\mathbb {I}}(x)\) is the GRCM of \(F|_{S_\nu }\) at x; if \( G^{-1}\circ F^G_\nu (x)\ge (G^{-1}\circ F|_{S_\nu })^{\mathbb {I}}(x)\), then \(G^{-1}\circ F^G_\nu (x)\) is the GCM of \(G^{-1}\circ F|_{S_\nu }\) at x. Hence, \(F_\nu ^G=G\circ (G^{-1}\circ F|_{S_\nu })^{\mathbb {I}}\). Suppose that \({\mathscr {H}}_{0}^\nu \) is false, that is, \(G^{-1}\circ F\) is not convex on \(S_\nu \), and let \(d=\sup _{S_\nu } |F-F_\nu ^G|=\sup _{S_\nu } (F-F_\nu ^G)\). In this case, \(F_n\) converges uniformly to F, whereas \(F^G_{n,\nu }\) converges uniformly to \(F_\nu ^G\) on \(S_\nu \), with probability 1; moreover, \(d>0\). Therefore, given some \(\epsilon \in (0,d)\), there exists some \(n_0\) such that, for \(n>n_0\), \(\sup _{S_{n,\nu }} |F_n-F|<\frac{\epsilon }{2}\) and \(\sup _{S_{n,\nu }} |F^G_{n,\nu }-F_\nu ^G|<\frac{\epsilon }{2}\), with probability 1. Then, for \(n>n_0\),

$$\begin{aligned} F_n(x)-F^G_{n,\nu }(x)>F(x)-\frac{\epsilon }{2}-(F_\nu ^G(x)+\frac{\epsilon }{2})=F(x)-F_\nu ^G(x)-\epsilon \end{aligned}$$
(4)

almost surely, for every \(x\in S_{n,\nu }\), which implies

$$\begin{aligned}&\sup _{S_{n,\nu }}|F_n(x)-F^G_{n,\nu }(x)|> \sup _{S_{n,\nu }}( F(x)-F_\nu ^G(x)-\epsilon )\nonumber \\&\quad =\sup _{S_{n,\nu }} ( F(x)-F_\nu ^G(x))-\epsilon =d-\epsilon >0. \end{aligned}$$
(5)

Therefore, since \(\epsilon \in (0,d)\) can be arbitrarily small, \(P(\sup _{S_{n,\nu }}|F_n(x)-F^G_{n,\nu }(x)|\ge d-\epsilon )\rightarrow 1\), for \(n\rightarrow \infty \). But since \(c_{\alpha ,n,\nu }\rightarrow 0\), it follows that \(P(\mathrm{KS}_{n,\nu }({\mathbf {X}})\ge c_{\alpha ,n,\nu })\rightarrow 1\). \(\square \)