1 Introduction and results

Consider two simple symmetric random walks in one dimension. The first, at each step independently, jumps upwards with probability 1/2 or downwards with probability 1/2. The second begins facing upwards and, at each step independently, decides to take a step in the direction it is facing with probability 1/2; or switches direction and takes a step the other way with probability 1/2.

We call the first of these two random walks the compass random walk, as it has an in-built sense of direction, and the second the switch random walk, as it only decides whether or not to switch directions. These two random walks have exactly the same distribution—they are simple symmetric random walks—although, as we will see when we define them rigorously, they are different functions of the underlying randomness. This means that when we talk about noise sensitivity or dynamical sensitivity of the two walks, they may (and do) have very different properties.

We now define carefully the objects of interest. Let \(X_1,X_2,\ldots \) be independent random variables satisfying

$$\begin{aligned} \mathbb {P}(X_i = 1) = \mathbb {P}(X_i = -1) = 1/2 \end{aligned}$$

for each \(i\in \mathbb {N}\). Define, for each \(n\ge 0\),

$$\begin{aligned} Y_n = \sum _{j=1}^n X_j \end{aligned}$$

and

$$\begin{aligned} Z_n = \sum _{k=1}^n \prod _{j=1}^k X_j \end{aligned}$$

where we take the empty sum to be zero, so \(Y_0=Z_0=0\). We call \(Y = (Y_n,\, n\ge 0)\) the compass random walk, and \(Z = (Z_n,\, n\ge 0)\) the switch random walk. We can think of \(Y = Y(X)\) and \(Z=Z(X)\) as functions of the sequence of random variables \(X=(X_1,X_2,\ldots )\). It is easy to see that, although they are different functions, the two walks Y and Z have the same distribution. Indeed, the written descriptions at the beginning of this section make clear that each of the two walks is a natural one-dimensional interpretation of the “ant in the labyrinth” or the “drunkard’s walk”. However, Z is more sensitive than Y to changes in the sequence X, in a sense that we will make precise below.
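As a purely illustrative aside (not used anywhere in the paper), the following minimal Python sketch makes the two constructions concrete; the function name compass_and_switch is ours. Both walks are driven by the same sequence X, but they are different functions of it.

```python
# A minimal illustrative sketch (ours): the compass walk Y and the switch walk Z
# driven by the same sequence X of independent +/-1 bits.
import numpy as np

def compass_and_switch(n, rng):
    X = rng.choice([-1, 1], size=n)
    Y = np.concatenate(([0], np.cumsum(X)))              # Y_k = X_1 + ... + X_k
    Z = np.concatenate(([0], np.cumsum(np.cumprod(X))))  # Z_k = sum of prefix products
    return Y, Z

rng = np.random.default_rng(0)
Y, Z = compass_and_switch(20, rng)
print(Y)   # both are simple symmetric random walk paths,
print(Z)   # but they are different functions of the same X
```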

We now introduce dynamical versions of our random walks Y and Z. For each \(j\ge 1\), let \((N_j(t), t\ge 0)\) be an independent Poisson process of rate 1, and for each \(i\ge 0\), let \(X_j^i\) be an independent random variable with \(\mathbb {P}(X_j^i = 1) = \mathbb {P}(X_j^i = -1) = 1/2\). Then define

$$\begin{aligned} X_j(t) = X_j^i \,\,\text { whenever }\,\, N_j(t) = i. \end{aligned}$$

In words, \(X_j(t)\) has the same distribution as \(X_j\) and rerandomises itself at the times of the Poisson process \(N_j(t)\). Write \(Y(t) = Y(X(t))\) and \(Z(t) = Z(X(t))\), or more explicitly

$$\begin{aligned} Y_n(t) = \sum _{j=1}^n X_j(t) \,\,\,\,\,\,\text { and }\,\,\,\,\,\, Z_n(t) = \sum _{k=1}^n \prod _{j=1}^k X_j(t) \end{aligned}$$

for each \(n\ge 0\).
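For a single fixed time t, the pair \((X(0),X(t))\) is easy to sample directly: each coordinate is rerandomised by time t with probability \(1-e^{-t}\), independently over j, and a rerandomised coordinate takes a fresh uniform \(\pm 1\) value. The following sketch (ours; the helper names are illustrative) uses this description.

```python
# A minimal sketch (ours) of the dynamical bits at a single fixed time t.
# X_j(t) equals X_j(0) unless the j-th Poisson clock has rung by time t
# (probability 1 - exp(-t)), in which case X_j(t) is a fresh uniform +/-1 value.
import numpy as np

def dynamical_bits(n, t, rng):
    X0 = rng.choice([-1, 1], size=n)
    rerandomised = rng.random(n) < 1 - np.exp(-t)
    fresh = rng.choice([-1, 1], size=n)
    Xt = np.where(rerandomised, fresh, X0)
    return X0, Xt

def switch_walk(X):
    # Z_0 = 0 and Z_k = sum_{j<=k} prod_{i<=j} X_i
    return np.concatenate(([0], np.cumsum(np.cumprod(X))))

rng = np.random.default_rng(1)
X0, Xt = dynamical_bits(1000, 0.05, rng)
Z0, Zt = switch_walk(X0), switch_walk(Xt)   # the paths Z(0) and Z(t)
```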

For each fixed \(t\ge 0\), the sequences \(Y(t) = (Y_0(t), Y_1(t),\ldots )\) and \(Z(t) = (Z_0(t), Z_1(t),\ldots )\) are simple symmetric random walks and therefore recurrent almost surely, in that \(Y_n(t)=0\) for infinitely many values of n almost surely, and similarly for \(Z_n(t)\). Benjamini et al. [4, Corollary 1.10] showed that recurrence for Y is dynamically stable in that

$$\begin{aligned} \mathbb {P}(\forall t\ge 0,\,\, Y_n(t) = 0 \text { for infinitely many values of } n) = 1. \end{aligned}$$

Our main result is that, in contrast, recurrence for Z is dynamically sensitive. Define

$$\begin{aligned} \mathcal {E}= \{t\in [0,1] : Z_n(t) \rightarrow \infty \,\text { as } n\rightarrow \infty \},\\ \mathcal {E}_0 = \{t\in [0,1] : \liminf _{n\rightarrow \infty } Z_n(t) > 0\}, \end{aligned}$$

and more generally for \(\alpha \ge 0\),

$$\begin{aligned} \mathcal {E}_\alpha = \Big \{t\in [0,1] : \liminf _{n\rightarrow \infty }\frac{Z_n(t)}{n^\alpha } > 0\Big \}. \end{aligned}$$

Theorem 1

There exist exceptional times of transience for the switch random walk: \(\mathcal {E}\) is non-empty almost surely. In fact, the Hausdorff dimension of \(\mathcal {E}_\alpha \) equals 1/2 almost surely for any \(\alpha \in [0,1/2)\). On the other hand, \(\mathcal {E}_\alpha \) is empty almost surely for any \(\alpha >1/2\).

It is an interesting question whether \(\mathcal {E}_{1/2}\) is empty or not. It is possible that the methods we use to prove Theorem 1 could be extended to investigate this more delicate case, but this would require a more detailed analysis of random walk sample paths that is beyond the scope of this paper.

We also show that the event that \(Z_n\) is positive is noise sensitive. In fact we prove a stronger quantitative noise sensitivity result.

Theorem 2

Let \((\varepsilon _n, n\ge 1)\) be any sequence in (0, 1) such that \(n\varepsilon _n\rightarrow \infty \). The sequence of events \((\{Z_n>0\}, n\ge 1)\) is quantitatively noise sensitive with respect to the sequence \((\varepsilon _n, n\ge 1)\), by which we mean that

$$\begin{aligned} \mathbb {P}(Z_n(0)>0 \text { and } Z_n(\varepsilon _n)>0) - \mathbb {P}(Z_n(0)>0)^2 \rightarrow 0 \end{aligned}$$

as \(n\rightarrow \infty \).

We note that the usual definition of (quantitative) noise sensitivity uses \(-\log (1-\varepsilon _n)\) in place of \(\varepsilon _n\) above, but since \(\varepsilon _n\in (0,1)\), this is equivalent to our statement.

We observe that if \(\liminf n\varepsilon _n <\infty \), then for arbitrarily large values of n, with probability bounded away from zero, none of the first n bits are rerandomised by time \(\varepsilon _n\), and therefore one cannot expect the events \(\{Z_n(0)>0\}\) and \(\{Z_n(\varepsilon _n)>0\}\) to decorrelate. In this sense Theorem 2 is as strong as it possibly could be; we say that the events \((\{Z_n>0\}, n\ge 1)\) are maximally noise sensitive.
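As an informal illustration of Theorem 2 (not used in the proofs), the left-hand side can be estimated by Monte Carlo; the sketch below, with arbitrary illustrative parameter values, should return a value close to zero once \(n\varepsilon _n\) is large.

```python
# Illustrative Monte Carlo estimate (not part of the proof) of the quantity in
# Theorem 2 for one choice of n and epsilon.
import numpy as np

def estimate_decorrelation(n, eps, trials, rng):
    both = single = 0
    for _ in range(trials):
        X0 = rng.choice([-1, 1], size=n)
        flip = rng.random(n) < 1 - np.exp(-eps)
        Xt = np.where(flip, rng.choice([-1, 1], size=n), X0)
        z0 = np.cumprod(X0).sum()   # Z_n(0)
        zt = np.cumprod(Xt).sum()   # Z_n(eps)
        both += (z0 > 0) and (zt > 0)
        single += (z0 > 0)
    return both / trials - (single / trials) ** 2

rng = np.random.default_rng(2)
# should be close to zero when n * eps is large, e.g. n = 2000, eps = 0.01
print(estimate_decorrelation(2000, 0.01, 5000, rng))
```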

Again, Theorem 2 is in stark contrast to the corresponding statement for the compass random walk. In fact, the event that \(Y_n\) is positive is known to be noise stable [5], in that

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0} \sup _n \mathbb {P}(\text {sign}\,Y_n(0) \ne \text {sign}\,Y_n(\varepsilon )) = 0. \end{aligned}$$

2 Background and notation

2.1 Motivation and existing results

Noise sensitivity and dynamical sensitivity have been an active area of research in probability at least since the papers of Häggström et al. [14] and Benjamini et al. [5]. One of the highlights of the subject is the proof that the existence of an infinite component in critical percolation in two dimensions is dynamically sensitive [12, 20]. The survey of Steif [21] and the book by Garban and Steif [13] provide further background and references.

Benjamini et al. [4] considered many properties of a quite general dynamical sequence of random variables, incorporating results on what we call the compass random walk Y. They showed that for the compass random walk, as well as the dynamical stability of recurrence that we mentioned before Theorem 1, the strong law of large numbers and the law of the iterated logarithm are also dynamically stable: almost surely there are no exceptional times at which either of these laws does not hold for Y(t). The paper [4] provided the initial motivation for our project, as we wished to know more about the sensitivity to dynamics of random walks, in particular whether there exist one-dimensional random walks for which recurrence is dynamically sensitive.

It is not too difficult to check that the strong law of large numbers is dynamically stable for the switch random walk as well as the compass random walk, but it follows from our results that the law of the iterated logarithm is dynamically sensitive for the switch walk; indeed, by symmetry, Theorem 1 implies that there almost surely exist times t at which \(Z_n(t)\) is negative for all large n.

Benjamini et al. [4] also considered random walks in higher dimensions. They showed that in \(\mathbb {Z}^d\), transience for the higher-dimensional analogue of the compass random walk is dynamically stable when \(d\ge 5\). For \(d\in \{3,4\}\) they showed that transience is dynamically sensitive and the set of exceptional times almost surely has Hausdorff dimension \((4-d)/2\). They conjectured that for \(d=2\) recurrence should be dynamically sensitive, which was proven by Hoffman [15], who also showed that the Hausdorff dimension of the set of exceptional times of transience is 1 almost surely. Hoffman and Amir [2] then showed that almost surely there are times at which the origin is the only position visited finitely many times. Further properties of dynamical random walks were investigated by Khoshnevisan et al. [16, 17].

The sequences \(\{Y_n>0\}\) and \(\{Z_n>0\}\) have exactly the same distribution—as sequences—and yet one is noise stable and one is noise sensitive. Warren [23], inspired by work of Tsirelson [22], gave a similar example of such a pair: writing

$$\begin{aligned} W_n = \sum _{k=1}^n\text {sign}(W_{k-1})X_k, \end{aligned}$$

the process \((W_n, n\ge 0)\) is also a simple symmetric random walk, and therefore has the same distribution as \((Y_n, n\ge 0)\), yet the events \(\{W_n>0\}\) are noise sensitive.

The object that we refer to as the switch random walk is also known by other names. It has been called the coin-turning random walk by Engländer and Volkov who introduced more general versions in [9], and these were further studied by Engländer et al. [10]. It has also been called the bootstrap random walk by Collevecchio, Hamza and Shi, who studied the pair \((Y,Z)\) in [8]; Collevecchio, Hamza and Liu gave a further generalisation in [7].

2.2 Layout of paper

This paper is organised as follows. In Sect. 2.3 we introduce some notation and outline some well-known facts about random walks that will be used extensively in our proofs. In Sect. 3 we give a rough sketch of the proofs of Theorems 1 and 2 that should give the reader an idea of the main arguments involved. We then carry out the proof of Theorem 2 in Sect. 4. The proof of Theorem 1 is substantially more complex, and we give an outline in Sect. 5, which reduces the bulk of the task to proving two propositions, Proposition 1 for the lower bound on the Hausdorff dimension and Proposition 2 for the upper bound, together with several technical lemmas. The proof of Proposition 1 is the most interesting part of the paper and substantially different from existing proofs of related results. Rather than relying on the methods detailed in [13] such as randomised algorithms or the spectral sample, it instead uses more hands-on methods, leaning heavily on the independence of increments of random walks. We carry this out in Sect. 6. Then in Sect. 7 we prove Proposition 2, which mainly consists of elementary but intricate approximations. Finally, in Sect. 8 we prove the technical lemmas required to complete the proof of Theorem 1.

2.3 Notation and preparatory results

Throughout, we write \(f(n)\lesssim g(n)\) if there exists a constant \(c\in (0,\infty )\) such that \(f(n)\le c g(n)\) for all large n, and \(f(n)\asymp g(n)\) if both \(f(n)\lesssim g(n)\) and \(g(n)\lesssim f(n)\). We use \(\approx \) only in heuristics to mean “is roughly equal to”. We write \(\mathbb {P}_x\) for the probability measure under which our random walks begin from x, rather than 0. To be precise, we mean that under \(\mathbb {P}_x\),

$$\begin{aligned} Z_n = x + \sum _{k=1}^n \prod _{j=1}^k X_j \end{aligned}$$

and similarly for \(Z_n(t)\), \(Y_n\) and \(Y_n(t)\).

We will use the Fortuin–Kasteleyn–Ginibre (FKG) inequality [11] using the partial order on \(\{-1,1\}^\mathbb {N}\) given by setting \((x_1,x_2,\ldots )\le (y_1,y_2,\ldots )\) if \(x_i\le y_i\) for all \(i\in \mathbb {N}\). This says that if f and g are either both increasing functions or both decreasing functions with respect to this partial order, then

$$\begin{aligned} \mathbb {E}[f(X)g(X)]\ge \mathbb {E}[f(X)]\mathbb {E}[g(X)] \end{aligned}$$
(1)

and if f is increasing but g is decreasing, then

$$\begin{aligned} \mathbb {E}[f(X)g(X)]\le \mathbb {E}[f(X)]\mathbb {E}[g(X)]. \end{aligned}$$
(2)

We gather here some useful and well-known facts about simple symmetric random walks.

Lemma 1

Suppose that \(j\ge 2\). If \(|z|\le j^{3/4}\) and \(z\equiv j\) (mod 2), then

$$\begin{aligned} \mathbb {P}(Z_j = z) \asymp \frac{1}{j^{1/2}} \exp \Big (-\frac{z^2}{2j}\Big ). \end{aligned}$$

If \(z\not \equiv j\) (mod 2) then \(\mathbb {P}(Z_j=z)=0\).

Proof

This is simply a version of the local central limit theorem: see for example [18, Proposition 2.5.3 and Corollary 2.5.4]. \(\square \)

Lemma 2

For any \(j\ge 2\) and \(x>0\),

$$\begin{aligned} \mathbb {P}(Z_j \ge x) \le \exp \Big (-\frac{x^2}{2j}\Big ). \end{aligned}$$

Proof

This is an application of a simple Chernoff-style bound. For any \(\lambda >0\),

$$\begin{aligned} \mathbb {P}(Z_j \ge x) \le \mathbb {E}[e^{\lambda Z_j}]e^{-\lambda x} = \mathbb {E}[e^{\lambda X_1}]^j e^{-\lambda x} = \Big (\frac{e^\lambda + e^{-\lambda }}{2}\Big )^j e^{-\lambda x}. \end{aligned}$$

Noting that

$$\begin{aligned} \frac{e^\lambda + e^{-\lambda }}{2} = \sum _{i=0}^\infty \frac{\lambda ^{2i}}{(2i)!} \le \sum _{i=0}^\infty \frac{(\lambda ^2/2)^i}{i!} = e^{\lambda ^2/2}, \end{aligned}$$

we get

$$\begin{aligned} \mathbb {P}(Z_j \ge x) \le \exp \Big (\frac{\lambda ^2 j}{2} - \lambda x\Big ) \end{aligned}$$

and choosing \(\lambda = x/j\) gives the result. \(\square \)

Lemma 3

For any \(z,j\in \mathbb {N}\),

$$\begin{aligned} \mathbb {P}(Z_i > -z \,\,\,\,\forall i=1,\ldots ,j) = \mathbb {P}(Z_j \in [-z+1,z]). \end{aligned}$$

Proof

This is a version of the reflection principle. Note that

$$\begin{aligned}&\mathbb {P}(Z_i> -z \,\,\,\,\forall i=1,\ldots ,j) \\&\quad = \mathbb {P}(Z_i > -z \,\,\,\,\forall i=1,\ldots ,j, \,\, Z_j\ge -z+1)\\&\quad = \mathbb {P}(Z_j\ge -z+1) - \mathbb {P}(\exists i\le j : Z_i \le -z, \,\, Z_j \ge -z+1). \end{aligned}$$

Now by reflecting the random walk at the first hitting time of \(-z\) (applying the strong Markov property), we have

$$\begin{aligned} \mathbb {P}(\exists i\le j : Z_i \le -z, \,\, Z_j \ge -z+1) = \mathbb {P}(Z_j \le -z-1) = \mathbb {P}(Z_j \ge z+1), \end{aligned}$$

which establishes the result. \(\square \)

Corollary 1

For any \(n\ge 1\),

$$\begin{aligned} \mathbb {P}(Z_i > 0 \,\,\,\,\forall i=1,\ldots ,n) \asymp n^{-1/2}. \end{aligned}$$

Proof

We have

$$\begin{aligned} \mathbb {P}(Z_i>0\,\,\,\,\forall i = 1,\ldots ,n)= & {} \mathbb {P}(Z_1 = 1, \, Z_i>0\,\,\,\,\forall i = 2,\ldots ,n)\\= & {} \frac{1}{2} \mathbb {P}_1(Z_i>0\,\,\,\,\forall i=1,\ldots , n-1). \end{aligned}$$

Applying Lemma 3, the above equals \(\frac{1}{2}\mathbb {P}_0(Z_{n-1}\in [0,1])\), and by Lemma 1 this is of order \(n^{-1/2}\). \(\square \)
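As a quick illustrative sanity check of Corollary 1 (no part of the proofs relies on it), the following sketch estimates \(\mathbb {P}(Z_i>0\,\,\forall i\le n)\) by simulation; the rescaled estimates \(p\sqrt{n}\) should stay roughly constant.

```python
# Illustrative check (ours) that P(Z_i > 0 for all i <= n) is of order n^{-1/2}.
import numpy as np

def stay_positive_probability(n, trials, rng):
    hits = 0
    for _ in range(trials):
        Z = np.cumsum(np.cumprod(rng.choice([-1, 1], size=n)))
        hits += np.all(Z > 0)
    return hits / trials

rng = np.random.default_rng(3)
for n in (100, 400, 1600):
    p = stay_positive_probability(n, 10000, rng)
    print(n, p, p * np.sqrt(n))   # last column should stay roughly constant
```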

3 Sketch proofs

For \(t\ge 0\) let \(I_0(t) = 0\), and for \(k\ge 1\) define

$$\begin{aligned} I_k(t) = \min \{i > I_{k-1}(t) : X_i(t)\ne X_i(0)\}, \end{aligned}$$

the kth index for which our Bernoulli random variables disagree at times 0 and t. We think of t being small, so that for many indices i we have \(X_i(t) = X_i(0)\), and we call \(I_k(t)\) the “kth change” (at time t relative to time 0). We call the steps of the random walk between \(0=I_0(t)\) and \(I_1(t)-1\) the first period, the steps between \(I_1(t)\) and \(I_2(t)-1\) the second period, and so on. For each k we let \(J_k(t) = I_k(t)-I_{k-1}(t)\) be the length of the kth period.

Our first key observation is that the increments of \(Z_n(0)\) and \(Z_n(t)\) are equal during odd periods (that is, for \(n\in [I_{2k}(t),I_{2k+1}(t)-1]\)); and the increments of \(Z_n(0)\) and \(-Z_n(t)\) are equal during even periods (that is, for \(n\in [I_{2k+1}(t),I_{2k+2}(t)-1]\)). See Fig. 1.

Fig. 1 A realisation of Z(0) in blue and Z(t) in red (dashed) for the first four periods. The dotted green lines mark the lines of reflection (color figure online)
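This mirroring is easy to verify on a simulated realisation. In the sketch below (ours), the number of disagreements among \(X_1,\ldots ,X_i\) at times 0 and t is even precisely when step i lies in an odd period, and this parity determines the sign relating the two increments.

```python
# A sketch (ours) verifying the key observation: the i-th increment of Z(t) equals
# the i-th increment of Z(0) on odd periods and its negative on even periods.
import numpy as np

rng = np.random.default_rng(4)
n, t = 500, 0.05
X0 = rng.choice([-1, 1], size=n)
flip = rng.random(n) < 1 - np.exp(-t)
Xt = np.where(flip, rng.choice([-1, 1], size=n), X0)

inc0 = np.cumprod(X0)              # increment of Z(0) at step i = 1,...,n
inct = np.cumprod(Xt)              # increment of Z(t) at step i
changes = np.cumsum(X0 != Xt)      # number of disagreements among the first i bits
expected = np.where(changes % 2 == 0, inc0, -inc0)
assert np.array_equal(inct, expected)
```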

To see why Theorem 2 is true, let \(t=\varepsilon \in (0,1)\) and run the random walks up to step n. Let \(U_n(t)\) be the sum of the increments of \(Z_n(0)\) over odd periods up to step n, and \(V_n(t)\) be the sum of the increments over even periods up to step n. Then clearly

$$\begin{aligned} Z_n(0) = U_n(t) + V_n(t). \end{aligned}$$

(Note that \(U_n(t)\) and \(V_n(t)\) depend on t because the periods depend on t, even though \(Z_n(0)\) itself does not depend on t.) Of course, we can also write \(Z_n(t)\) as the sum of its increments over odd periods, plus the sum of its increments over even periods. But the increments of \(Z_n(t)\) over odd periods are equal to the increments of \(Z_n(0)\) over odd periods, and the increments of \(Z_n(t)\) over even periods are precisely minus the increments of \(Z_n(0)\) over even periods. Thus

$$\begin{aligned} Z_n(t) = U_n(t) - V_n(t). \end{aligned}$$

As a result,

$$\begin{aligned}&\mathbb {P}(Z_n(0)>0 \text { and } Z_n(t)>0) = \mathbb {P}(U_n(t) + V_n(t)>0 \text { and }\\&\quad U_n(t) - V_n(t)>0) = \mathbb {P}(U_n(t)>|V_n(t)|). \end{aligned}$$

Now we note that, as long as \(t\gg 1/n\) so that there are many periods by step n, the quantities \(U_n(t)\) and \(V_n(t)\) have almost the same distribution when n is large, and are almost independent. They are also symmetric, and the probability that they are equal to each other, or that either equals zero, is small. If U and V are independent symmetric continuous random variables, then \(\mathbb {P}(U>|V|)=1/4\). Approximating this statement with \(U_n(t)\) and \(V_n(t)\) in place of U and V gives that

$$\begin{aligned} \mathbb {P}(Z_n(0)>0 \text { and } Z_n(t)>0) \rightarrow 1/4 \end{aligned}$$

as \(n\rightarrow \infty \), which is what is needed to prove Theorem 2 since clearly \(\mathbb {P}(Z_n(0)>0)^2\rightarrow 1/4\).
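The decomposition \(Z_n(0)=U_n(t)+V_n(t)\) and \(Z_n(t)=U_n(t)-V_n(t)\) can likewise be checked directly on a simulated realisation, as in the following sketch (ours, purely illustrative).

```python
# A sketch (ours) checking Z_n(0) = U + V and Z_n(t) = U - V, where U and V sum the
# increments of Z(0) over odd and even periods respectively.
import numpy as np

rng = np.random.default_rng(5)
n, t = 1000, 0.02
X0 = rng.choice([-1, 1], size=n)
flip = rng.random(n) < 1 - np.exp(-t)
Xt = np.where(flip, rng.choice([-1, 1], size=n), X0)

inc0 = np.cumprod(X0)
odd_period = (np.cumsum(X0 != Xt) % 2 == 0)   # step i lies in an odd period
U = inc0[odd_period].sum()
V = inc0[~odd_period].sum()
assert np.cumprod(X0).sum() == U + V          # Z_n(0)
assert np.cumprod(Xt).sum() == U - V          # Z_n(t)
```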

Theorem 1 is significantly more difficult to prove. We give a sketch of a proof of the existence of exceptional times, whose main ideas are also the key to the most difficult part of calculating the Hausdorff dimension of the set of such times. There will be a much more detailed proof outline in Sect. 5.

It is simpler to deal with \(\mathcal {E}_0\) rather than \(\mathcal {E}\) or \(\mathcal {E}_\alpha \) for much of the proof. We define the event

$$\begin{aligned} P_n(t) = \{Z_k(t)>0 \,\,\,\,\forall k\in \{1,\ldots ,n\}\}, \end{aligned}$$

that the random walk Z(t) is positive for its first n steps, and consider

$$\begin{aligned} \kappa _n = \int _0^1 \mathbb {1}_{P_n(t)} \mathop {}\mathrm {d}t, \end{aligned}$$

the Lebesgue amount of time in [0, 1] during which Z(t) stays positive for its first n steps. To show the existence of exceptional times, ignoring some technical issues, it essentially suffices to show that

$$\begin{aligned} \mathbb {E}[\kappa _n^2] \le C\mathbb {E}[\kappa _n]^2 \end{aligned}$$

for some finite constant C, from which we can deduce that \(\mathbb {P}(\kappa _n > 0) \ge 1/C\) and let \(n\rightarrow \infty \).
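For the reader's convenience, the deduction here is the standard second moment step: by the Cauchy–Schwarz inequality,

$$\begin{aligned} \mathbb {E}[\kappa _n]^2 = \mathbb {E}\big [\kappa _n \mathbb {1}_{\{\kappa _n>0\}}\big ]^2 \le \mathbb {E}[\kappa _n^2]\,\mathbb {P}(\kappa _n>0) \le C\,\mathbb {E}[\kappa _n]^2\,\mathbb {P}(\kappa _n>0), \end{aligned}$$

so that \(\mathbb {P}(\kappa _n>0)\ge 1/C\) for every n.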

For the first moment, by Fubini’s theorem and stationarity,

$$\begin{aligned} \mathbb {E}[\kappa _n] = \int _0^1 \mathbb {P}(P_n(t)) \mathop {}\mathrm {d}t = \int _0^1 \mathbb {P}(P_n(0)) \mathop {}\mathrm {d}t = \mathbb {P}(P_n(0)). \end{aligned}$$

Corollary 1 tells us that \(\mathbb {P}(P_n(0))\asymp n^{-1/2}\).

For the second moment, a simple argument (again using Fubini's theorem and stationarity, which we will give in full later) gives

$$\begin{aligned} \mathbb {E}[\kappa _n^2] \le 2\int _0^1 \mathbb {P}(P_n(0) \cap P_n(t)) \mathop {}\mathrm {d}t. \end{aligned}$$

Our task is therefore to show that \(\int _0^1 \mathbb {P}(P_n(0)\cap P_n(t)) \mathop {}\mathrm {d}t \lesssim \mathbb {P}(P_n(0))^2 \asymp n^{-1}\).

During the even periods, the increments of Z(0) and Z(t) are mirrored. One can use this to show that the probability that both Z(0) and Z(t) remain positive over an even period is smaller than the square of the probability that Z(0) stays positive over the same period. The total length of the even periods is roughly n/2 provided t is not too small, and so (skipping over several important details) we might hope that, at least when t is not too small,

$$\begin{aligned} \mathbb {P}(P_n(0)\cap P_n(t)) \lesssim \mathbb {P}(P_{n/2}(0))^2. \end{aligned}$$

The details required to show this involve sewing together the increments over the even periods to create one random walk path of length roughly n/2. It is possible to do this in a very simple and natural way, except for one remaining issue: we cannot ignore the first period, on which the two random walks Z(0) and Z(t) are equal. On this period, clearly the best upper bound we can get on the probability that both random walks stay positive is simply \(\mathbb {P}(P_{I_1(t)-1}(0))\), the probability that Z(0) alone stays positive, rather than this quantity squared. A more reasonable overall upper bound is therefore

$$\begin{aligned} \mathbb {P}(P_n(0)\cap P_n(t)) \lesssim \frac{\mathbb {P}(P_{n/2}(0))^2}{\mathbb {P}(P_{I_1(t)-1}(0))}. \end{aligned}$$

This does indeed hold, and since \(I_1(t)\approx 2/t\), we have \(\mathbb {P}(P_{I_1(t)-1}(0))\asymp (2/t)^{-1/2}\asymp t^{1/2}\), so that

$$\begin{aligned} \int _0^1 \mathbb {P}(P_n(0)\cap P_n(t)) \mathop {}\mathrm {d}t \lesssim \int _0^1 \frac{n^{-1}}{t^{1/2}} \mathop {}\mathrm {d}t \asymp n^{-1} \end{aligned}$$

as required. One may further note that an extra factor of \(t^{-\gamma }\) in the integral would not make any difference to the calculation provided that \(\gamma <1/2\), which combined with Frostman’s lemma essentially gives us the lower bound of 1/2 on the Hausdorff dimension.

4 Proof of Theorem 2: noise sensitivity for \(\{Z_n>0\}\)

Fix a sequence \((\varepsilon _n, n\ge 1)\) with \(\varepsilon _n\in (0,1)\) for all n and \(n\varepsilon _n\rightarrow \infty \). Many of the definitions in this section will depend implicitly on \(\varepsilon _n\). Recall that for \(t\ge 0\) we defined \(I_0(t) = 0\), and for \(k\ge 1\),

$$\begin{aligned} I_k(t) = \min \{i > I_{k-1}(t) : X_i(t)\ne X_i(0)\}, \end{aligned}$$

the start of the \((k+1)\)th period. Let

$$\begin{aligned} K(n) = 2\lfloor n(1-e^{-\varepsilon _n})/4 \rfloor . \end{aligned}$$

We note that, since each \(X_i\) has rerandomised by time \(\varepsilon _n\) with probability \(1-e^{-\varepsilon _n}\), the period length \(I_k(\varepsilon _n)-I_{k-1}(\varepsilon _n)\) is a Geometric random variable of parameter \((1-e^{-\varepsilon _n})/2\). Thus by the law of large numbers we have \(I_{K(n)}(\varepsilon _n) \approx n\).
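Indeed, each period length has mean \(2/(1-e^{-\varepsilon _n})\), so

$$\begin{aligned} \mathbb {E}[I_{K(n)}(\varepsilon _n)] = \frac{2K(n)}{1-e^{-\varepsilon _n}} = \frac{4\lfloor n(1-e^{-\varepsilon _n})/4\rfloor }{1-e^{-\varepsilon _n}} \in \Big (n-\frac{4}{1-e^{-\varepsilon _n}},\, n\Big ], \end{aligned}$$

and the error term \(4/(1-e^{-\varepsilon _n})\) is of smaller order than n precisely because \(n\varepsilon _n\rightarrow \infty \).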

There will be three main parts to this proof. In the first part, we show that the probability that the sum of the increments of a random walk on the odd periods is larger than the modulus of the sum of the increments on the even periods converges to 1/4. In the second part, we will prove Theorem 2 but with \(I_{K(n)}(\varepsilon _n)\) in place of n. Finally, in the third part, we will transfer from using \(I_{K(n)}(\varepsilon _n)\) to n.

Part 1: The probability that the sum of increments on odd periods exceeds the modulus of the sum of increments on even periods converges to 1/4.

Define

$$\begin{aligned} U_n = \sum _{i=1}^{I_1(\varepsilon _n) - 1} X_i + \sum _{i=I_2(\varepsilon _n)}^{I_3(\varepsilon _n) - 1} X_i + \cdots + \sum _{i=I_{K(n)-2}(\varepsilon _n)}^{I_{K(n)-1}(\varepsilon _n) -1}X_i + X_{I_{K(n)}(\varepsilon _n)} \end{aligned}$$

and

$$\begin{aligned} V_n = \sum _{i=I_1(\varepsilon _n)}^{I_2(\varepsilon _n) - 1} X_i + \sum _{i=I_3(\varepsilon _n)}^{I_4(\varepsilon _n) - 1} X_i + \cdots + \sum _{i=I_{K(n)-1}(\varepsilon _n)}^{I_{K(n)}(\varepsilon _n) -1}X_i. \end{aligned}$$

In words, \(U_n\) is the sum of the increments of a simple symmetric random walk (in fact Y, though this is not important) over the odd periods up to step roughly n, and \(V_n\) is the sum over the even periods up to step roughly n. This is, of course, not quite true, since \(I_{K(n)}(\varepsilon _n)\) is unlikely to be exactly n. On the positive side, this gives \(U_n\) and \(V_n\) some nice properties: in particular, they are identically distributed.

We claim that

$$\begin{aligned} \lim _{n\rightarrow \infty } \mathbb {P}(U_n + V_n> 0 \text { and } U_n - V_n > 0 ) = 1/4. \end{aligned}$$

To see this, we observe that

$$\begin{aligned} 1&= \mathbb {P}(U_n>V_n>0) + \mathbb {P}(U_n>-V_n>0) + \mathbb {P}(V_n>U_n>0) + \mathbb {P}(-V_n>U_n>0)\\&\quad + \mathbb {P}(U_n<V_n<0) + \mathbb {P}(U_n<-V_n<0) + \mathbb {P}(V_n<U_n<0) + \mathbb {P}(-V_n<U_n<0)\\&\quad + \mathbb {P}(U_n = 0 \text { or } V_n = 0 \text { or } U_n=V_n \text { or } U_n=-V_n). \end{aligned}$$

The first eight terms are all equal, and the last tends to 0 as \(n\rightarrow \infty \). Thus

$$\begin{aligned}&\mathbb {P}(U_n + V_n> 0 \text { and } U_n - V_n> 0 ) \\&\quad = \mathbb {P}( U_n> |V_n| )\\&\quad = \mathbb {P}( U_n> V_n> 0 ) + \mathbb {P}( U_n> -V_n> 0 ) + \mathbb {P}(U_n>V_n=0)\\&\quad \rightarrow 1/8 + 1/8 + 0 = 1/4 \end{aligned}$$

as claimed.

Part 2: Proving Theorem 2 but with \(I_{K(n)}(\varepsilon _n)\) in place of n.

Noting that K(n) is even, we now let

$$\begin{aligned} U'_n= & {} Z_{I_1(\varepsilon _n)-1}(0) + \sum _{\begin{array}{c} k=3\\ k\text { odd} \end{array}}^{K(n)-1} \big (Z_{I_k(\varepsilon _n)-1}(0)\\&-Z_{I_{k-1}(\varepsilon _n)-1}(0)\big ) + Z_{I_{K(n)}(\varepsilon _n)}(0) - Z_{I_{K(n)}(\varepsilon _n)-1}(0) \end{aligned}$$

and

$$\begin{aligned} V'_n = \sum _{\begin{array}{c} k=2\\ k\text { even} \end{array}}^{K(n)} (Z_{I_k(\varepsilon _n)-1}(0)-Z_{I_{k-1}(\varepsilon _n)-1}(0)). \end{aligned}$$

Clearly we have \(Z_{I_{K(n)}(\varepsilon _n)}(0) = U'_n + V'_n\). Moreover, since the increments of \(Z(\varepsilon _n)\) and Z(0) are equal on odd periods and mirrored on even periods, we have

$$\begin{aligned} Z_{I_{K(n)}(\varepsilon _n)}(\varepsilon _n) = U'_n - V'_n. \end{aligned}$$

Thirdly, note that (again recalling that K(n) is even) \(U'_n\) and \(V'_n\) have the same joint distribution as \(U_n\) and \(V_n\). Thus we have

$$\begin{aligned} \mathbb {P}(Z_{I_{K(n)}(\varepsilon _n)}(0)> 0 \text { and } Z_{I_{K(n)}(\varepsilon _n)}(\varepsilon _n)> 0)&= \mathbb {P}( U'_n + V'_n> 0 \text { and } U'_n - V'_n> 0 )\\&= \mathbb {P}( U_n + V_n> 0 \text { and } U_n - V_n > 0 ) \end{aligned}$$

which we have just shown (in Part 1) converges to 1/4 as \(n\rightarrow \infty \). Thus

$$\begin{aligned} \mathbb {P}(Z_{I_{K(n)}(\varepsilon _n)}(0)> 0 \text { and } Z_{I_{K(n)}(\varepsilon _n)}(\varepsilon _n)> 0) - \mathbb {P}(Z_{I_{K(n)}(\varepsilon _n)}(0) > 0)^2 \rightarrow \frac{1}{4} - \Big (\frac{1}{2}\Big )^2 = 0, \end{aligned}$$

establishing the theorem with \(I_{K(n)}(\varepsilon _n)\) in place of n.

We remark here that so far, the proof works for any value of \(\varepsilon _n\in (0,1)\). However, if \(\varepsilon _n\) is too small, then the value of K(n) is not large, which will cause problems in the following.

Part 3: Transferring from \(I_{K(n)}(\varepsilon _n)\) to n.

We claim that

$$\begin{aligned}&\mathbb {P}( Z_n(0)> 0 \text { and } Z_n(\varepsilon _n)>0 ) = \mathbb {P}(Z_{I_{K(n)}(\varepsilon _n)}(0)> 0 \text { and }\nonumber \\&\quad Z_{I_{K(n)}(\varepsilon _n)}(\varepsilon _n) > 0 ) + o(1). \end{aligned}$$
(3)

We will use the elementary bounds, for any events A, B, \(A'\) and \(B'\),

$$\begin{aligned} \mathbb {P}(A\cap B) \le \mathbb {P}(A'\cap B') + \mathbb {P}(A\setminus A') + \mathbb {P}(B\setminus B') \end{aligned}$$

and

$$\begin{aligned} \mathbb {P}(A\cap B) \ge \mathbb {P}(A'\cap B') - \mathbb {P}(A'\setminus A) - \mathbb {P}(B'\setminus B). \end{aligned}$$

For the upper bound, using the first fact above,

$$\begin{aligned}&\mathbb {P}( Z_n(0)> 0 \text { and } Z_n(\varepsilon _n)>0 ) \le \mathbb {P}(Z_{I_{K(n)}(\varepsilon _n)}(0)> 0 \text { and }\\&\quad Z_{I_{K(n)}(\varepsilon _n)}(\varepsilon _n)> 0 ) + \mathbb {P}(Z_n(0)> 0 \text { but } \\&\quad Z_{I_{K(n)}(\varepsilon _n)}(0) \le 0) + \mathbb {P}(Z_n(\varepsilon _n) > 0 \text { but } Z_{I_{K(n)}(\varepsilon _n)}(\varepsilon _n) \le 0), \end{aligned}$$

and for the lower bound, using the second fact above,

$$\begin{aligned}&\mathbb {P}( Z_n(0)> 0 \text { and } Z_n(\varepsilon _n)>0 )\ge \mathbb {P}(Z_{I_{K(n)}(\varepsilon _n)}(0)> 0 \text { and } \\&\quad Z_{I_{K(n)}(\varepsilon _n)}(\varepsilon _n)> 0 ) - \mathbb {P}(Z_n(0) \le 0 \text { but } \\&\quad Z_{I_{K(n)}(\varepsilon _n)}(0)> 0) - \mathbb {P}(Z_n(\varepsilon _n) \le 0 \text { but } Z_{I_{K(n)}(\varepsilon _n)}(\varepsilon _n) > 0). \end{aligned}$$

We will show that

$$\begin{aligned} \mathbb {P}(Z_n(0) > 0 \text { but } Z_{I_{K(n)}(\varepsilon _n)}(0) \le 0)\rightarrow 0; \end{aligned}$$

the other three terms can be dealt with in the same way. To do this, we first note that for any \(x_n,y_n>0\),

$$\begin{aligned} \mathbb {P}\big (Z_n(0)> 0 \text { but } Z_{I_{K(n)}(\varepsilon _n)}(0) \le 0\big )&\le \mathbb {P}\big ( |I_{K(n)}(\varepsilon _n) - n| > x_n \big ) + \mathbb {P}\big ( Z_n(0) \in (0,y_n) \big )\nonumber \\&\quad + \mathbb {P}\Big ( Z_n(0) \ge y_n \text { but } \min _{j\in [n-x_n,n+x_n]} Z_j(0) \le 0 \Big ). \end{aligned}$$
(4)

We first consider \(\mathbb {P}( |I_{K(n)}(\varepsilon _n) - n| > x_n)\). We use Markov’s inequality to see that

$$\begin{aligned} \mathbb {P}\big ( |I_{K(n)}(\varepsilon _n) - n| > x_n \big ) \le \frac{\mathbb {E}\big [|I_{K(n)}(\varepsilon _n) - n|^2\big ]}{x_n^2}, \end{aligned}$$

and using the fact that \(I_{K(n)}(\varepsilon _n)\) is a sum of K(n) independent Geometric random variables of parameter \((1-e^{-\varepsilon _n})/2\), we have

$$\begin{aligned} \mathbb {E}\big [|I_{K(n)}(\varepsilon _n) - n|^2\big ]&= \text {Var}(I_{K(n)}(\varepsilon _n)) + \mathbb {E}[I_{K(n)}(\varepsilon _n)]^2 - 2n\mathbb {E}[I_{K(n)}(\varepsilon _n)] + n^2\\&= \frac{2K(n)(1+e^{-\varepsilon _n})}{(1-e^{-\varepsilon _n})^2} + \frac{4K(n)^2}{(1-e^{-\varepsilon _n})^2} - \frac{4nK(n)}{1-e^{-\varepsilon _n}} + n^2. \end{aligned}$$

Recalling that \(K(n) = 2\lfloor n(1-e^{-\varepsilon _n})/4 \rfloor \), the above is at most

$$\begin{aligned} \frac{n(1+e^{-\varepsilon _n})}{1-e^{-\varepsilon _n}} + n^2 - \Big (\frac{8n}{1-e^{-\varepsilon _n}}\Big )\Big (\frac{n(1-e^{-\varepsilon _n})}{4}-1\Big ) + n^2 \le \frac{10n}{1-e^{-\varepsilon _n}}. \end{aligned}$$

Thus

$$\begin{aligned} \mathbb {P}\big ( |I_{K(n)}(\varepsilon _n) - n| > x_n \big ) \le \frac{10n}{x_n^2(1-e^{-\varepsilon _n})}. \end{aligned}$$

Choosing the value \(x_n = n^{5/8}/(1-e^{-\varepsilon _n})^{3/8}\), we have

$$\begin{aligned} \mathbb {P}\big ( |I_{K(n)}(\varepsilon _n) - n| > x_n \big ) \le \frac{10}{n^{1/4}(1-e^{-\varepsilon _n})^{1/4}} \rightarrow 0 \end{aligned}$$
(5)

by our assumption that \(n\varepsilon _n\rightarrow \infty \).

We now move on to the second term on the right-hand side of (4). Choosing \(y_n = n^{3/8}/\varepsilon _n^{1/8}\), since \((Z_j(0), j\ge 0)\) is a simple symmetric random walk and \(y_n\ll n^{1/2}\), by the central limit theorem we have

$$\begin{aligned} \mathbb {P}\big ( Z_n(0) \in (0,y_n) \big ) \rightarrow 0. \end{aligned}$$
(6)

For the final term in (4), by the strong Markov property and Lemma 3,

$$\begin{aligned} \mathbb {P}\Big ( Z_n(0) \ge y_n \text { but } \min _{j\in [n-x_n,n+x_n]} Z_j(0) \le 0 \Big )&\le \mathbb {P}_0\Big (\max _{j\in [0,x_n]} Z_j(0) \ge y_n\Big ) \\&\quad + \mathbb {P}_{y_n}\Big (\min _{j\in [0,x_n]} Z_j(0) \le 0\Big )\\&= 2\big (1-\mathbb {P}(Z_{\lfloor x_n\rfloor }(0)\in [-y_n+1,y_n])\big ). \end{aligned}$$

Since \(x_n = n^{5/8}/(1-e^{-\varepsilon _n})^{3/8} \ll n^{6/8}/\varepsilon _n^{2/8} = y_n^2\), the central limit theorem tells us that the above also converges to zero as \(n\rightarrow \infty \). Combining this with (5) and (6), we see from (4) that

$$\begin{aligned} \mathbb {P}\big (Z_n(0) > 0 \text { but } Z_{I_{K(n)}(\varepsilon _n)}(0) \le 0\big )\rightarrow 0. \end{aligned}$$

This, together with very similar bounds on the other three terms mentioned above, establishes (3). In Part 2 we showed that

$$\begin{aligned} \lim _{n\rightarrow \infty }\mathbb {P}(Z_{I_{K(n)}(\varepsilon _n)}(0)> 0 \text { and } Z_{I_{K(n)}(\varepsilon _n)}(\varepsilon _n) > 0 ) = 1/4, \end{aligned}$$

and clearly \(\mathbb {P}(Z_n(0)>0)\rightarrow 1/2\), so the proof of Theorem 2 is complete.

5 Outline of the proof of Theorem 1: Hausdorff dimension of exceptional times is 1/2

We now outline the main steps in turning the heuristic in Sect. 3 into a rigorous proof that the Hausdorff dimension of

$$\begin{aligned} \mathcal {E}_\alpha = \Big \{t\in [0,1] : \liminf _{n\rightarrow \infty } \frac{Z_n(t)}{n^\alpha }>0\Big \} \end{aligned}$$

is 1/2 almost surely for any \(\alpha \in [0,1/2)\). Since \(\mathcal {E}_\alpha \subset \mathcal {E}_0\) for any \(\alpha \ge 0\), it suffices to give an upper bound on the dimension of \(\mathcal {E}_0\) and a lower bound on the dimension of \(\mathcal {E}_\alpha \) for \(\alpha \in (0,1/2)\). This also, of course, implies that \(\mathcal {E}\) is non-empty almost surely and therefore that there exist exceptional times of transience. We will proceed by stating a series of results, whose proofs we delay until later sections.

5.1 Lower bound on Hausdorff dimension of \(\mathcal {E}_\alpha \)

As in the sketch proof, we define the event

$$\begin{aligned} P_n(t) = \{Z_i(t)>0 \,\,\,\,\forall i=1,\ldots ,n\} \end{aligned}$$

that the random walk Z(t) is positive up to step n, and similarly

$$\begin{aligned} P_n = \{Z_i>0 \,\,\,\,\forall i=1,\ldots ,n\}. \end{aligned}$$

We will use these events for much of the proof. However, to consider \(\mathcal {E}_\alpha \) for \(\alpha >0\), we will also need the more complicated events

$$\begin{aligned} P^\alpha _n(t) = \big \{Z_i(t)\ge i^\alpha \,\,\,\,\forall i=1,\ldots ,n\big \} \end{aligned}$$

that the random walk Z(t) remains above the curve \(i^\alpha \) for all steps \(i\le n\), and similarly for \(P^\alpha _n\). Here we could consider any \(\alpha \ge 0\), though we will mostly think of \(\alpha \in [0,1/2)\). Note that \(P^0_n(t)=P_n(t)\).

Let

$$\begin{aligned} T^\alpha _n = \{t\in [0,1] : P^\alpha _n(t) \text { holds}\} \end{aligned}$$

be the set of times at which the random walk stays above the curve \(i^\alpha \) up to step n. We write \(\bar{T}^\alpha _n\) for the closure of \(T^\alpha _n\) and \(T^\alpha =\bigcap _n T^\alpha _n\). Finally define, for \(\gamma \in [0,1)\),

$$\begin{aligned} \Phi ^\alpha _n(\gamma ) = \frac{1}{\mathbb {P}(P^\alpha _n)^2}\int _0^1 \int _0^1 \frac{\mathbb {1}_{P^\alpha _n(s)\cap P^\alpha _n(t)}}{|t-s|^\gamma } \mathop {}\mathrm {d}s \mathop {}\mathrm {d}t. \end{aligned}$$

Our lower bound on the Hausdorff dimension of \(\mathcal {E}_\alpha \) will be based on the following corollary of [20, Lemma 6.2], which in turn is an application of Frostman’s lemma.

Lemma 4

Suppose that for some \(\alpha \ge 0\) and \(\gamma \in (0,1)\) we have

$$\begin{aligned} \sup _n \mathbb {E}[\Phi ^\alpha _n(\gamma )]<\infty . \end{aligned}$$

Then the Hausdorff dimension of \(\bigcap _n \bar{T}^\alpha _n\) is at least \(\gamma \) with strictly positive probability.

Given Lemma 4, which we will prove in Sect. 8, our main task in proving the lower bound becomes to show that \(\mathbb {E}[\Phi ^\alpha _n(\gamma )]\) is bounded above, uniformly in n, for each \(\alpha ,\gamma <1/2\). This will be the most difficult (and most novel) part of our proof, and will be carried out in Sect. 6.

Proposition 1

For any \(\alpha ,\gamma \in [0,1/2)\),

$$\begin{aligned} \sup _n \mathbb {E}[\Phi ^\alpha _n(\gamma )]<\infty . \end{aligned}$$

Combining Lemma 4 and Proposition 1 tells us that for any \(\alpha ,\gamma \in [0,1/2)\), the Hausdorff dimension of \(\bigcap _n \bar{T}^\alpha _n\) is at least \(\gamma \) with strictly positive probability. This is not quite what was promised in Theorem 1, which in fact says that the Hausdorff dimension of \(\mathcal {E}_\alpha \) is 1/2 almost surely for any \(\alpha \in [0,1/2)\). Moving from \(\bigcap _n \bar{T}^\alpha _n\) to \(T^\alpha \) is a technicality that can be handled in basically the same way as [14, Lemma 3.2]; and of course \(T^\alpha \subset \mathcal {E}_\alpha \). Finally, showing that the Hausdorff dimension of \(\mathcal {E}_\alpha \) is at least 1/2 almost surely, rather than with positive probability, follows from standard ergodicity arguments (of course this cannot hold for \(T^\alpha \), since with positive probability \(Z_2(t)=0\) for all \(t\in [0,1]\)). The following lemmas take care of these steps. We will prove them in Sect. 8.

Lemma 5

For any \(\alpha \ge 0\), we have

$$\begin{aligned} \bigcap _{n=1}^\infty \bar{T}^\alpha _n = \bigcap _{n=1}^\infty T^\alpha _n \end{aligned}$$

almost surely.

Lemma 6

For each \(\alpha \ge 0\), the Hausdorff dimension of \(\mathcal {E}_\alpha \) is a constant (possibly depending on \(\alpha \)) almost surely.

5.2 Upper bound on Hausdorff dimension of \(\mathcal {E}_0\)

The following definitions are more or less standard in the noise sensitivity literature. For a function \(f:\{-1,1\}^\mathbb {N}\rightarrow \mathbb {R}\) and random variables \(X_1,X_2,\ldots \) taking values in \(\{-1,1\}\), we say that \(m\in \mathbb {N}\) is pivotal for f if

$$\begin{aligned}&f(X_1,\ldots ,X_{m-1},X_m,X_{m+1},X_{m+2},\ldots )\\&\quad \ne f(X_1,\ldots ,X_{m-1},-X_m,X_{m+1},X_{m+2},\ldots ). \end{aligned}$$

Of course this definition depends on the realisation of \(X_1,X_2,\ldots \), although we note that it is independent of the value of \(X_m\in \{-1,1\}\). For an event E, we say that m is pivotal for E if m is pivotal for the indicator function of E. We define the influence of the mth bit (on E) to be

$$\begin{aligned} \mathcal I_m(E) = \mathbb {P}(m \text { is pivotal for } E) \end{aligned}$$

and the total influence of E to be

$$\begin{aligned} \mathcal I(E) = \sum _{m=1}^\infty \mathcal I_m(E). \end{aligned}$$

For technical reasons, we will need the following generalisations of \(P_n\) and \(T^0\). For \(k\in 2\mathbb {Z}_+\), define the event

$$\begin{aligned} P_{k,n} = \{Z_k=0, \, Z_i > 0 \,\,\forall i=k+1,\ldots ,k+n\} \end{aligned}$$

that Z is zero at step k and positive for the next n steps, and let

$$\begin{aligned} T'_k = \{t\in [0,1] : Z_k(t)=0,\, Z_i(t)>0 \,\,\,\,\forall i=k+1,k+2,\ldots \} \end{aligned}$$

be the set of times at which \(Z_k(t)\) is zero and \(Z_i(t)\) is strictly positive from step \(k+1\) onwards.

Our next lemma is just a rephrasing of [20, Theorem 8.1] into our setting, and gives us a condition for bounding the Hausdorff dimension of \(T'_k\) in terms of the total influence of \(P_{k,n}\).

Lemma 7

The Hausdorff dimension of \(T'_k\) is almost surely at most

$$\begin{aligned} \liminf _{n\rightarrow \infty } \Big (1-\frac{\log \mathbb {P}(P_{k,n})}{\log \mathcal I(P_{k,n})}\Big )^{-1}. \end{aligned}$$

Proof

This is almost exactly the second part of the statement of [20, Theorem 8.1] translated into our notation. There is an extra condition that the events \(P_{k,n}\) must depend only on finitely many random variables, but this is clearly satisfied since \(P_{k,n}\) depends only on \(X_1,\ldots ,X_{n+k}\). \(\square \)

To implement Lemma 7 we now need an upper bound on the influences of \(P_n\).

Proposition 2

For any \(m=1,2,\ldots ,n\), we have

$$\begin{aligned} \mathcal I_m(P_n) \asymp \frac{n-m+1}{n^{3/2}}. \end{aligned}$$

This result will be proved in Sect. 7. Combining Proposition 2 with Lemma 7 will give us the upper bound of 1/2 on the Hausdorff dimension of \(\mathcal {E}_0\), and hence on that of \(\mathcal {E}\). We carry out the details in Sect. 5.4.
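Proposition 2 can also be checked numerically, purely as an illustration: a bit m is pivotal for \(P_n\) exactly when flipping \(X_m\), which replaces \(Z_i\) by \(2Z_{m-1}-Z_i\) for all \(i\ge m\), changes whether the walk stays positive for its first n steps. The following sketch (ours) estimates \(\mathcal I_m(P_n)\) in this way and compares it with \((n-m+1)/n^{3/2}\); the two columns should agree up to a constant factor.

```python
# Illustrative Monte Carlo sketch (not part of the proof) estimating the influence
# I_m(P_n) of bit m on the event P_n = {Z_i > 0 for all i <= n}.
import numpy as np

def influence_estimate(n, m, trials, rng):
    pivotal = 0
    for _ in range(trials):
        Z = np.cumsum(np.cumprod(rng.choice([-1, 1], size=n)))
        Zm1 = Z[m - 2] if m >= 2 else 0     # Z_{m-1}, with Z_0 = 0
        Zflip = Z.copy()
        Zflip[m - 1:] = 2 * Zm1 - Z[m - 1:]  # effect of flipping X_m
        pivotal += np.all(Z > 0) != np.all(Zflip > 0)
    return pivotal / trials

rng = np.random.default_rng(6)
n = 100
for m in (1, 25, 50, 100):
    print(m, influence_estimate(n, m, 50000, rng), (n - m + 1) / n ** 1.5)
```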

5.3 \(\mathcal {E}_\alpha \) is empty for \(\alpha >1/2\)

The final part of Theorem 1 says that \(\mathcal {E}_\alpha \) is empty almost surely when \(\alpha >1/2\). The proof of this fact follows a fairly standard argument. For \(\alpha ,t\ge 0\) and \(n\in \mathbb {N}\) define the event \(L^\alpha _n(t) = \{Z_n(t) \ge n^\alpha \}\), and for \(k\in \mathbb {N}\) let \(\mathcal L^\alpha _n(k) = \int _0^k \mathbb {1}_{L^\alpha _n(t)} \mathop {}\mathrm {d}t\). Note that

$$\begin{aligned} \mathbb {P}(\mathcal L^\alpha _n(1)> 0) \le \mathbb {P}(\mathcal L^\alpha _n(1)> 0)\frac{\mathbb {E}[\mathcal L^\alpha _n(2)]}{\mathbb {E}[\mathcal L^\alpha _n(2) \mathbb {1}_{\{\mathcal L^\alpha _n(1)> 0\}}]} = \frac{\mathbb {E}[\mathcal L^\alpha _n(2)]}{\mathbb {E}[\mathcal L^\alpha _n(2) \,|\, \mathcal L^\alpha _n(1) > 0]}. \end{aligned}$$
(7)

By Fubini’s theorem and stationarity,

$$\begin{aligned} \mathbb {E}[\mathcal L^\alpha _n(2)] = \int _0^2 \mathbb {P}(Z_n(t) \ge n^\alpha ) \mathop {}\mathrm {d}t = 2\mathbb {P}(Z_n \ge n^\alpha ). \end{aligned}$$

By Markov’s inequality, for any \(\lambda >0\),

$$\begin{aligned} \mathbb {P}(Z_n \ge n^\alpha ) = \mathbb {P}(\exp (\lambda Z_n) \ge \exp (\lambda n^\alpha )) \le \mathbb {E}[\exp (\lambda Z_n)]\exp (-\lambda n^{\alpha }). \end{aligned}$$

Since \(Z_n\) is a sum of n independent and identically distributed random variables,

$$\begin{aligned} \mathbb {E}[\exp (\lambda Z_n)] = \mathbb {E}[\exp (\lambda Z_1)]^n = (e^\lambda /2 + e^{-\lambda }/2)^n. \end{aligned}$$

When \(\lambda \le 1\) we have \(e^\lambda /2 + e^{-\lambda }/2 \le 1+3\lambda ^2/4\), so fixing \(\alpha \in (1/2,1]\) and choosing \(\lambda = n^{\alpha -1}\), we have

$$\begin{aligned} \mathbb {E}[\exp (\lambda Z_n)] \le \Big (1+\frac{3}{4}\lambda ^2\Big )^n = \Big (1 + \frac{3}{4}n^{2\alpha -2}\Big )^n \le \exp \Big (\frac{3}{4}n^{2\alpha -1}\Big ). \end{aligned}$$

Thus, again with \(\alpha \in (1/2,1]\) and \(\lambda = n^{\alpha -1}\),

$$\begin{aligned} \mathbb {E}[\mathcal L^\alpha _n(2)] = 2\mathbb {P}(Z_n \ge n^\alpha ) \le 2\exp \Big (\frac{3}{4}n^{2\alpha -1}\Big )\exp (-n^{2\alpha -1}) = 2\exp (-n^{2\alpha -1}/4).\nonumber \\ \end{aligned}$$
(8)

On the other hand, letting \(T = \inf \{t\ge 0 : Z_n(t) \ge n^\alpha \}\), we have

$$\begin{aligned} \mathbb {E}[\mathcal L^\alpha _n(2) \,|\, \mathcal L^\alpha _n(1)> 0] \ge \mathbb {E}\Big [\int _T^{T+1} \mathbb {1}_{L^\alpha _n(t)} \mathop {}\mathrm {d}t \,\Big |\, \mathcal L^\alpha _n(1) > 0\Big ]. \end{aligned}$$

Let \(T' = \inf \{t\ge T : \text {one of the first n steps rerandomises}\}\). Then clearly, provided \(T<\infty \),

$$\begin{aligned} \int _T^{T+1} \mathbb {1}_{L^\alpha _n(t)} \mathop {}\mathrm {d}t \ge (T'-T)\wedge 1. \end{aligned}$$

Moreover, by the strong Markov property, \(T'-T\) is exponentially distributed with parameter n. Thus

$$\begin{aligned} \mathbb {E}\Big [\int _T^{T+1} \mathbb {1}_{L^\alpha _n(t)} \mathop {}\mathrm {d}t \,\Big |\, \mathcal {F}_T \Big ]\ge & {} \mathbb {E}[(T'-T)\wedge 1] = \int _0^1 s\cdot ne^{-ns} \mathop {}\mathrm {d}s \ge \int _0^{1/n} ns e^{-ns} \mathop {}\mathrm {d}s \\\ge & {} \frac{1}{2en}, \end{aligned}$$

and therefore

$$\begin{aligned} \mathbb {E}[\mathcal L^\alpha _n(2) \,|\, \mathcal L^\alpha _n(1) > 0] \ge \frac{1}{2en}. \end{aligned}$$

Combining this with (7) and (8), for any \(\alpha \in (1/2,1]\) we have

$$\begin{aligned} \mathbb {P}(\mathcal L^\alpha _n(1) > 0) \le 2\exp (-n^{2\alpha -1}/4)\cdot 2en. \end{aligned}$$

By the Borel–Cantelli lemma, for any \(\alpha \in (1/2,1]\), almost surely there are only finitely many n for which \(L^\alpha _n(t)\) occurs for some \(t\in [0,1]\). Since any \(t\in \mathcal {E}_\alpha \) satisfies \(Z_n(t)\ge cn^\alpha \ge n^{\alpha '}\) for some \(c>0\), any \(\alpha '\in (1/2,\alpha )\) and all large n, it follows that \(\mathcal {E}_\alpha \) is empty almost surely for \(\alpha \in (1/2,1]\). The same is then true for \(\alpha >1\), since \(\mathcal {E}_\alpha \subset \mathcal {E}_1\) for such \(\alpha \).

5.4 Completing the proof of Theorem 1

We now tie together the results from Sects. 5.1, 5.2 and 5.3 to complete the proof of Theorem 1.

Proof of Theorem 1

We showed in Sect. 5.3 that \(\mathcal {E}_\alpha \) is empty almost surely for \(\alpha >1/2\), so it remains to show that the Hausdorff dimension of \(\mathcal {E}_\alpha \) is 1/2 for any \(\alpha \in [0,1/2)\). As stated at the beginning of Sect. 5, it suffices to show that the Hausdorff dimension of \(\mathcal {E}_\alpha \) is at least 1/2 for \(\alpha \in (0,1/2)\) and the Hausdorff dimension of \(\mathcal {E}_0\) is at most 1/2.

By Lemma 4 and Proposition 1, we know that for any \(\alpha ,\gamma \in [0,1/2)\), the Hausdorff dimension of \(\bigcap _n \bar{T}^\alpha _n\) is at least \(\gamma \) with strictly positive probability. By Lemma 5, the same holds for \(T^\alpha \), and since \(T^\alpha \subset \mathcal {E}_\alpha \), the same holds for \(\mathcal {E}_\alpha \). Lemma 6 then tells us that the Hausdorff dimension of \(\mathcal {E}_\alpha \) must be at least 1/2 almost surely.

Moving on to the upper bound, take \(k\in 2\mathbb {Z}_+\) and \(m\in \{k+1,k+2,\ldots ,k+n\}\). If \(Z_k\ne 0\) then m cannot be pivotal for \(P_{k,n}\), so

$$\begin{aligned}&\mathcal I_m(P_{k,n}) = \mathbb {P}(Z_k=0, \,\, m \text { is pivotal for } P_{k,n}) \\&\quad = \mathbb {P}(Z_k=0)\mathbb {P}(m \text { is pivotal for } P_{k,n}\,|\,Z_k=0). \end{aligned}$$

But by the Markov property,

$$\begin{aligned} \mathbb {P}(m \text { is pivotal for } P_{k,n}\,|\,Z_k=0) = \mathbb {P}(m-k \text { is pivotal for } P_n) = \mathcal I_{m-k}(P_n). \end{aligned}$$

Thus

$$\begin{aligned} \mathcal I(P_{k,n}) = \sum _{m=1}^{k} \mathcal I_m(P_{k,n}) + \sum _{m=k+1}^{k+n} \mathcal I_m(P_{k,n}) \le k + \mathbb {P}(Z_k=0) \sum _{m=1}^n \mathcal I_m(P_n), \end{aligned}$$

and so, applying Proposition 2,

$$\begin{aligned} \mathcal I(P_{k,n}) \lesssim k+ \frac{\mathbb {P}(Z_k=0)}{n^{3/2}}\sum _{m=1}^n (n-m+1) \asymp k+\mathbb {P}(Z_k=0) n^{1/2}. \end{aligned}$$
(9)

By the Markov property

$$\begin{aligned} \mathbb {P}(P_{k,n})= & {} \mathbb {P}(Z_k=0)\mathbb {P}(Z_i>0 \,\,\forall i=k+1,k+2,\ldots ,k+n \,|\,Z_k=0) \\= & {} \mathbb {P}(Z_k=0)\mathbb {P}(P_n), \end{aligned}$$

and by Corollary 1 we have \(\mathbb {P}(P_n)\asymp n^{-1/2}\). Combining this with (9), we see that there exist constants \(c,c'\in (0,\infty )\) such that

$$\begin{aligned} \frac{-\log \mathbb {P}(P_{k,n})}{\log \mathcal I(P_{k,n})} \ge \frac{\frac{1}{2} \log n - \log c - \log \mathbb {P}(Z_k=0)}{\frac{1}{2} \log n + \log c' + \log (\mathbb {P}(Z_k=0)+kn^{-1/2})}, \end{aligned}$$

which converges to 1 as \(n\rightarrow \infty \) for each fixed k. From Lemma 7 we obtain that the Hausdorff dimension of \(T_k'\) is almost surely at most \((1+1)^{-1} = 1/2\).

Finally,

$$\begin{aligned} \mathcal {E}_0 = \{t\in [0,1] : \liminf _{n\rightarrow \infty } Z_n(t)>0 \} = \bigcup _k T_k' \end{aligned}$$

which, as a countable union of sets each almost surely of Hausdorff dimension at most 1/2, itself has Hausdorff dimension at most 1/2 almost surely. This completes the proof. \(\square \)

6 Proof of Proposition 1: bounding \(\mathbb {E}[\Phi ^\alpha _n(\gamma )]\) from above

First note that, by Fubini’s theorem,

$$\begin{aligned} \mathbb {E}[ \Phi ^\alpha _n(\gamma )]&= \frac{1}{\mathbb {P}(P_n^\alpha )^2}\mathbb {E}\Big [\int _0^1 \int _0^1 \frac{\mathbb {1}_{P_n^\alpha (s)\cap P_n^\alpha (t)}}{|t-s|^\gamma } \mathop {}\mathrm {d}s\, \mathop {}\mathrm {d}t\Big ]\\&= \frac{1}{\mathbb {P}(P_n^\alpha )^2}\int _0^1 \int _0^1 \frac{\mathbb {P}(P_n^\alpha (s) \cap P_n^\alpha (t))}{|t-s|^\gamma } \mathop {}\mathrm {d}s \, \mathop {}\mathrm {d}t. \end{aligned}$$

By stationarity, this is bounded above by

$$\begin{aligned} \frac{2}{\mathbb {P}(P_n^\alpha )^2} \int _0^1 \frac{\mathbb {P}(P_n^\alpha (0) \cap P_n^\alpha (t))}{t^\gamma } \mathop {}\mathrm {d}t, \end{aligned}$$

and since \(P_n^\alpha (u) \subset P_n(u)\) for any \(\alpha ,u\ge 0\), this is at most

$$\begin{aligned} \frac{2}{\mathbb {P}(P_n^\alpha )^2} \int _0^1 \frac{\mathbb {P}(P_n(0) \cap P_n(t))}{t^\gamma } \mathop {}\mathrm {d}t. \end{aligned}$$

The following lemma says that the probability of \(P_n^\alpha \) is of the same order as the probability of \(P_n\). It is a simple application of [19, Theorem 2] and we will prove it later in this section.

Lemma 8

For any \(\alpha <1/2\),

$$\begin{aligned} \mathbb {P}(P_n^\alpha )\asymp \frac{1}{\sqrt{n}}. \end{aligned}$$

We now want to bound \(\mathbb {P}(P_n(0) \cap P_n(t))\). As suggested in the sketch proof in Sect. 3, the main idea is that on even periods two mirrored random walks (representing the walk at time 0 and time t) must both be larger than 0. The difficulty is in handling the dependencies between periods, and for this we need some more definitions. We recall first that \(I_0(t)=0\) and for \(j\ge 1\)

$$\begin{aligned} I_j(t) = \min \{i > I_{j-1}(t) : X_i(t)\ne X_i(0)\}, \end{aligned}$$

the jth index for which our Bernoulli random variables disagree at times 0 and t. We call the steps between \(I_{j-1}(t)\) and \(I_j(t)-1\) the “jth period”, and let \(J_j(t) = I_j(t)-I_{j-1}(t)\) be the length of the jth period.

For each \(j\ge 1\), define the event

$$\begin{aligned} A_j(t) = \{Z_i(0)>0 \text { and } Z_i(t)>0 \,\,\,\, \forall i\in [I_{j-1}(t), I_j(t)-1]\}, \end{aligned}$$

which says that our dynamical random walk is positive throughout the jth period at both time 0 and time t. For each \(i\ge 0\), let

$$\begin{aligned} W_i(t) = \frac{Z_i(0)+Z_i(t)}{2}, \end{aligned}$$

the average of the two walks Z(0) and Z(t). Note that, for each t, during odd periods the increments of \(W_i(t)\) are equal to the increments of \(Z_i(0)\); and during even periods, \(W_i(t)\) is constant. (When we talk about increments we mean as i changes, keeping t fixed.)
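These two properties of W(t) can be checked directly on a simulated realisation, as in the following sketch (ours, purely illustrative).

```python
# A sketch (ours) checking that the increments of W(t) agree with those of Z(0)
# on odd periods, and that W(t) is constant on even periods.
import numpy as np

rng = np.random.default_rng(7)
n, t = 500, 0.05
X0 = rng.choice([-1, 1], size=n)
flip = rng.random(n) < 1 - np.exp(-t)
Xt = np.where(flip, rng.choice([-1, 1], size=n), X0)

Z0 = np.concatenate(([0], np.cumsum(np.cumprod(X0))))
Zt = np.concatenate(([0], np.cumsum(np.cumprod(Xt))))
W = (Z0 + Zt) / 2

inc_W = np.diff(W)                             # increment of W at step i = 1,...,n
inc_Z0 = np.cumprod(X0)
odd_period = (np.cumsum(X0 != Xt) % 2 == 0)
assert np.array_equal(inc_W[odd_period], inc_Z0[odd_period])
assert np.all(inc_W[~odd_period] == 0)
```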

When j is odd, define the event

$$\begin{aligned} A'_j(t) = \{W_i(t)>0 \,\,\,\, \forall i\in [I_{j-1}(t),I_j(t)-1]\} \end{aligned}$$

that W(t) is positive throughout the jth period. Note that, since \(W_i(t)\) is the average of \(Z_i(0)\) and \(Z_i(t)\), if both of these are positive, then so is \(W_i(t)\). That is, if j is odd, then \(A_j(t) \subset A'_j(t)\).

Making the same comparison when j is even would not be useful since W is constant. Instead, when j is even, let \(B^{(j)}_i(t)\), \(i\ge 0\) be an independent simple random walk started from \(W_{I_{j-1}(t)-1}(t)\) and define

$$\begin{aligned} A'_j(t) = \{B^{(j)}_i(t) \in (0,2W_{I_{j-1}(t)-1}(t)) \,\,\,\, \forall i\in [1,J_j(t)]\}. \end{aligned}$$

Figure 2 shows a realisation of Z(0), Z(t), W(t), \(B^{(2)}(t)\) and \(B^{(4)}(t)\).

Fig. 2 A realisation of Z(0) and Z(t) (blue/red), W(t) (black), \(B^{(2)}(t)\) and \(B^{(4)}(t)\) (both green) for the first four periods (color figure online)

We need to rule out some unlikely events. Let

$$\begin{aligned} E^{\text {odd}}_n(t)= \{J_3(t) + J_5(t) + \ldots + J_{2\lfloor nt/8\rfloor +1}(t) \ge n/8\}, \end{aligned}$$

which we think of as the event that the odd periods (not including the first) are not too short,

$$\begin{aligned} E^{\text {even}}_n(t)= \{J_2(t) + J_4(t) + \ldots + J_{2\lfloor nt/8\rfloor }(t) \ge n/8\}, \end{aligned}$$

which we think of as the event that the even periods are not too short,

$$\begin{aligned} E_n(t) = E^{\text {odd}}_n(t)\cap E^{\text {even}}_n(t)\end{aligned}$$

the event that both the odd and even periods are not too short, and

$$\begin{aligned} E'_n(t) = \{I_{2\lfloor nt/8\rfloor +1}(t)\le n\}, \end{aligned}$$

the event that we have at least \(2\lfloor nt/8\rfloor +1\) periods before step n.

We note that for each j, when t is small \(J_j(t)\) has expectation roughly 2/t, so when nt is large the above events should all occur with probability close to 1. The following lemma, which we prove later in the section, quantifies this more precisely.

Lemma 9

There exists a constant \(\delta >0\) such that for any \(t\in [0,1]\) and \(n\in \mathbb {N}\),

$$\begin{aligned} \mathbb {P}(E_n(t)^c) + \mathbb {P}(E'_n(t)^c) \le \exp (-\delta nt). \end{aligned}$$

For now we will work on the event \(E_n(t)\). Also define, for \(k\in \mathbb {N}\),

$$\begin{aligned} V_k(t) = \bigcap _{j=1}^k A_j(t) \,\,\,\, \text { and } V'_k(t) = \bigcap _{j=1}^k A'_j(t). \end{aligned}$$

Our next result translates the probability that we want to bound, which is that of \(V_k(t)\), into probabilities of events involving W(t) and \(B^{(j)}(t)\). The probabilities on the right are squared, reflecting the fact that we have two random walks (one at time 0 and another at time t) that must both stay positive. Apart from the first period, which is important to retain separately, only the even periods are included, since they are the ones on which the two random walks are mirrored.

Proposition 3

For any \(k,n\in \mathbb {N}\) with \(n\ge 2k\) and any \(t\in [0,1]\),

$$\begin{aligned} \mathbb {P}\big (V_k(t) \cap E_n(t)\big )&\le \mathbb {P}\big (A_1'(t)\cap E_n(t)\big )\cdot \prod _{j=1}^{\lfloor k/2\rfloor } \mathbb {P}\big (B^{(2j)}_i(t)\\&\quad > 0 \,\,\,\, \forall i\in [1,J_{2j}(t)] \,\big |\, V'_{2j-1}(t) \cap E_n(t)\big )^2. \end{aligned}$$

The proof of this result involves carefully separating out as much independence as possible between the different periods and applying the FKG inequality. Again we postpone the proof to later in the section in order to continue with our overarching proof of Proposition 1.

Next observe that since \(B^{(j)}(t)\) is simply an independent random walk started from \(W_{I_{j-1}(t)-1}(t)\), it has the same distribution as W itself over the \((j+1)\)th period. This inspires our next proposition, which allows us to telescope the product from Proposition 3 back into a statement only about W.

Proposition 4

For any \(k,n\in \mathbb {N}\) with \(n\ge 2k\) and any \(t\in [0,1]\),

$$\begin{aligned}&\prod _{j=1}^k \mathbb {P}\big (B^{(2j)}_i(t) > 0 \,\,\,\, \forall i\in [1,J_{2j}(t)] \,\big |\, V'_{2j-1}(t) \cap E_n(t)\big ) \\&\quad = \frac{\mathbb {P}\big (\bigcap _{j=1}^{k+1} A'_{2j-1}(t) \cap E_n(t)\big )}{\mathbb {P}(A'_1(t)\cap E_n(t))}. \end{aligned}$$

Combining Propositions 3 and 4, and then using elementary bounds, allows us to prove the following.

Proposition 5

Suppose that \(t\in [0,1]\) and \(n\in \mathbb {N}\). Then for any \(k\ge nt/4\), we have

$$\begin{aligned} \mathbb {P}\big (V_k(t) \cap E_n(t)\big ) \lesssim \frac{1}{nt^{1/2}}. \end{aligned}$$

Leaving the proof of Proposition 5 until later, we now observe that

$$\begin{aligned}&\mathbb {P}\big (P_n(0)\cap P_n(t)\big ) \\&\quad = \mathbb {P}\big (P_n(0)\cap P_n(t)\cap E_n(t)\cap E'_n(t)\big ) + \mathbb {P}\big (P_n(0)\cap P_n(t)\cap (E_n(t)^c\cup E'_n(t)^c)\big )\\&\quad \le \mathbb {P}\big (V_{2\lfloor nt/8\rfloor + 1}(t)\cap E_n(t)\big ) + \mathbb {P}\big (P_n(0)\cap (E_n(t)^c\cup E'_n(t)^c)\big )\\&\quad = \mathbb {P}\big (V_{2\lfloor nt/8\rfloor + 1}(t)\cap E_n(t)\big ) + \mathbb {P}\big (P_n(0)\big )\mathbb {P}\big (E_n(t)^c\cup E'_n(t)^c\big ) \end{aligned}$$

where the last equality used the independence of Z(0) and the lengths of the periods at time t. By Proposition 5, the first term on the last line above is at most a constant times \(1/(nt^{1/2})\), and by Corollary 1 and Lemma 9, the second term is at most a constant times \(n^{-1/2}\exp (-\delta nt)\) for some constant \(\delta >0\). Thus

$$\begin{aligned} \mathbb {P}\big (P_n(0)\cap P_n(t)\big ) \lesssim \frac{1}{nt^{1/2}}+\frac{1}{n^{1/2}}\exp (-\delta nt) \end{aligned}$$

and so

$$\begin{aligned} \int _0^1 \frac{\mathbb {P}(P_n(0) \cap P_n(t))}{t^\gamma } \mathop {}\mathrm {d}t \lesssim \frac{1}{n} \int _0^1 t^{-1/2-\gamma }\mathop {}\mathrm {d}t + \frac{1}{n^{1/2}} \int _0^1 t^{-\gamma }e^{-\delta nt} \mathop {}\mathrm {d}t. \end{aligned}$$

For \(\gamma <1/2\), the first integral on the right-hand side above is finite and the second integral (which can be approximated by integrating separately over (0, 1/n] and (1/n, 1)) is of order \(n^{\gamma -1}\). Therefore, for \(\gamma <1/2\),

$$\begin{aligned} \int _0^1 \frac{\mathbb {P}(P_n(0) \cap P_n(t))}{t^\gamma } \mathop {}\mathrm {d}t \lesssim n^{-1} + n^{\gamma -3/2} \asymp n^{-1}. \end{aligned}$$

Recalling from the start of the section that

$$\begin{aligned} \mathbb {E}[ \Phi ^\alpha _n(\gamma )] \le \frac{2}{\mathbb {P}(P_n^\alpha )^2} \int _0^1 \frac{\mathbb {P}(P_n(0) \cap P_n(t))}{t^\gamma } \mathop {}\mathrm {d}t, \end{aligned}$$

and from Lemma 8 that for any \(\alpha <1/2\),

$$\begin{aligned} \mathbb {P}(P_n^\alpha )\asymp \frac{1}{\sqrt{n}}, \end{aligned}$$

we have for \(\alpha ,\gamma <1/2\) that

$$\begin{aligned} \mathbb {E}[ \Phi ^\alpha _n(\gamma )] \lesssim 1. \end{aligned}$$

This completes the proof of Proposition 1, subject to proving all of the intermediate results above.

Before we begin to prove these results, we will need another elementary lemma as an ingredient in the proof of Proposition 3.

Lemma 10

If \((S_i,\, i\ge 0)\) is a simple symmetric random walk, then for any \(x,y,k\in \mathbb {N}\),

$$\begin{aligned} \mathbb {P}_x(S_i \in (0,2y) \,\,\,\, \forall i\le k) \le \mathbb {P}_y(S_i \in (0,2y)\,\,\,\,\forall i\le k). \end{aligned}$$

This is easily proved by induction. We include a proof later, but now proceed with the much more interesting proofs of Propositions 3 and 4. These proofs contain the main ideas of our paper.

Proof of Proposition 3

Our first step is to move from \(A_j(t)\) to \(A'_j(t)\). To do so, we go via a third collection of events which we call \(\tilde{A}_j(t)\). When j is odd, let \(\tilde{A}_j(t) = A'_j(t)\). We have already mentioned that if j is odd, then

$$\begin{aligned} A_j(t) \subset A'_j(t) = \tilde{A}_j(t). \end{aligned}$$

When j is even, define the event

$$\begin{aligned} \tilde{A}_j(t) = \{Z_i(0)\in (0,2W_{I_{j-1}(t)-1}(t)) \,\,\,\, \forall i\in [I_{j-1}(t), I_j(t)-1]\}. \end{aligned}$$

We claim that when j is even, we also have \(A_j(t)\subset \tilde{A}_j(t)\). Indeed, suppose that j is even. We show that if \(\omega \not \in \tilde{A}_j(t)\) then \(\omega \not \in A_j(t)\). If \(\omega \not \in \tilde{A}_j(t)\) then there exists \(i\in [I_{j-1}(t),I_j(t)-1]\) such that either \(Z_i(0)\le 0\), in which case clearly \(\omega \not \in A_j(t)\), or

$$\begin{aligned} Z_i(0)\ge 2W_{I_{j-1}(t)-1}(t) = Z_{I_{j-1}(t)-1}(0) + Z_{I_{j-1}(t)-1}(t). \end{aligned}$$

Then

$$\begin{aligned} Z_i(0)-Z_{I_{j-1}(t)-1}(0) \ge Z_{I_{j-1}(t)-1}(t), \end{aligned}$$

so since the increments of \(Z_i(t)\) are the negative of the increments of \(Z_i(0)\) during even periods,

$$\begin{aligned} Z_i(t)-Z_{I_{j-1}(t)-1}(t) \le - Z_{I_{j-1}(t)-1}(t) \end{aligned}$$

and therefore \(Z_i(t)\le 0\). Thus \(\omega \not \in A_j(t)\), establishing our claim. We deduce that, for any \(k\in \mathbb {N}\),

$$\begin{aligned} A_1(t)\cap A_2(t)\cap \ldots \cap A_k(t) \subset \tilde{A}_1(t)\cap \tilde{A}_2(t)\cap \ldots \cap \tilde{A}_k(t). \end{aligned}$$
(10)

Note that the increments of \(Z_i(0)\) on even periods are independent of the whole process \(W_i(t)\). Combining this fact with Lemma 10, we have

$$\begin{aligned} \mathbb {P}\big (\tilde{A}_1(t)\cap \tilde{A}_2(t)\cap \ldots \cap \tilde{A}_k(t) \big | \mathcal {F}_{I(t)}\big ) \le \mathbb {P}\big (A'_1(t)\cap A'_2(t)\cap \ldots \cap A'_k(t) \big | \mathcal {F}_{I(t)}\big )\nonumber \\ \end{aligned}$$
(11)

for any \(k\in \mathbb {N}\), where \(\mathcal {F}_{I(t)} = \sigma (I_j(t),j\ge 0)\). Combining (10) and (11) and taking expectations to remove the conditioning, for any \(k\in \mathbb {N}\) we have

$$\begin{aligned} \mathbb {P}(V_k(t) \cap E_n(t))\le \mathbb {P}(V'_k(t)\cap E_n(t)). \end{aligned}$$

Applying Bayes’ formula and then ignoring the odd terms for \(j\ge 3\), we have

$$\begin{aligned} \mathbb {P}\big (V_k(t) \cap E_n(t)\big )&\le \mathbb {P}\big (A_1'(t)\cap E_n(t)\big )\cdot \prod _{j=2}^k \mathbb {P}\big (A'_j(t) \,\big |\, V'_{j-1}(t) \cap E_n(t)\big )\nonumber \\&\le \mathbb {P}\big (A_1'(t)\cap E_n(t)\big )\cdot \prod _{j=1}^{\lfloor k/2\rfloor } \mathbb {P}\big (A'_{2j}(t) \,\big |\, V'_{2j-1}(t) \cap E_n(t)\big ). \end{aligned}$$
(12)

We now apply the FKG inequality (2). Recalling that

$$\begin{aligned} A'_{2j}(t)&= \{W_{I_{2j-1}(t)-1}(t) + B^{(2j)}_i(t) \in (0,2W_{I_{2j-1}(t)-1}(t)) \,\,\,\, \forall i\in [1,J_{2j}(t)]\}\\&= \{W_{I_{2j-1}(t)-1}(t) + B^{(2j)}_i(t) > 0 \,\,\,\, \forall i\in [1,J_{2j}(t)]\}\\&\quad \cap \{W_{I_{2j-1}(t)-1}(t) + B^{(2j)}_i(t) < 2W_{I_{2j-1}(t)-1}(t) \,\,\,\, \forall i\in [1,J_{2j}(t)]\}, \end{aligned}$$

and noting that the two events above are increasing and decreasing respectively, we get that

$$\begin{aligned}&\mathbb {P}\big (A'_{2j}(t) \,\big |\, V'_{2j-1}(t) \cap E_n(t)\big ) \\&\quad \le \mathbb {P}\big (B^{(2j)}_i(t)> 0 \,\,\,\, \forall i\in [1,J_{2j}(t)] \,\big |\, V'_{2j-1}(t) \cap E_n(t)\big )\\&\quad \quad \cdot \mathbb {P}\big (B^{(2j)}_i(t) < 2W_{I_{2j-1}(t)-1}(t) \,\big |\, V'_{2j-1}(t) \cap E_n(t)\big )\\&\quad = \mathbb {P}\big (B^{(2j)}_i(t) > 0 \,\,\,\, \forall i\in [1,J_{2j}(t)] \,\big |\, V'_{2j-1}(t) \cap E_n(t)\big )^2, \end{aligned}$$

where the inequality comes from (2) and the equality follows from symmetry about \(W_{I_{2j-1}(t)-1}(t)\) (recalling that \(B^{(2j)}_0(t) = W_{I_{2j-1}(t)-1}(t)\)). Substituting this into (12), we have shown that

$$\begin{aligned} \mathbb {P}\big (V_k(t) \cap E_n(t)\big ) \le \mathbb {P}\big (A_1'(t)\cap E_n(t)\big )\cdot \prod _{j=1}^{\lfloor k/2\rfloor } \mathbb {P}\big (B^{(2j)}_i(t) > 0 \,\,\,\, \forall i\in [1,J_{2j}(t)] \,\big |\, V'_{2j-1}(t) \cap E_n(t)\big )^2 \end{aligned}$$

as required. \(\square \)

Proof of Proposition 4

We work by induction on k. For \(k=1\), we have

$$\begin{aligned}&\mathbb {P}\big (B^{(2)}_i(t)>0 \,\,\,\,\forall i\in [1,J_2(t)] \,\big |\, V'_1(t)\cap E_n(t)\big ) \\&\quad =\frac{\mathbb {P}\big (\{B^{(2)}_i(t)>0 \,\,\,\,\forall i\in [1,J_2(t)]\} \cap A'_1(t)\cap E_n(t)\big )}{\mathbb {P}\big (A'_1(t)\cap E_n(t)\big )}. \end{aligned}$$

On the event \(A'_1(t)\cap E_n(t)\), the law of \((B^{(2)}_i(t))_{i\in [1,J_2(t)]}\) is identical to that of \((W_{I_2(t)-1+i}(t))_{i\in [1,J_3(t)]}\), and therefore

$$\begin{aligned}&\mathbb {P}\big (B^{(2)}_i(t)>0 \,\,\,\,\forall i\in [1,J_2(t)] \,\big |\, V'_1(t)\cap E_n(t)\big )\\&\quad = \frac{\mathbb {P}\big (A'_3(t) \cap A'_1(t)\cap E_n(t)\big )}{\mathbb {P}\big (A'_1(t)\cap E_n(t)\big )}, \end{aligned}$$

establishing the claim in the case \(k=1\). The general case is very similar: assuming that the claim holds for \(k-1\), we have

$$\begin{aligned}&\prod _{j=1}^k \mathbb {P}\big (B^{(2j)}_i(t)> 0 \,\,\,\, \forall i\in [1,J_{2j}(t)] \,\big |\, V'_{2j-1}(t) \cap E_n(t)\big )\\&\quad = \frac{\mathbb {P}\big (\bigcap _{j=1}^k A'_{2j-1}(t)\cap E_n(t)\big )}{\mathbb {P}\big (A'_1(t)\cap E_n(t)\big )}\mathbb {P}\big (B^{(2k)}_i(t)>0 \,\,\,\forall i\in [1,J_{2k}(t)] \,\big |\, V'_{2k-1}(t)\cap E_n(t)\big ). \end{aligned}$$

Considering the last term on the right-hand side above, we note that \(B^{(2k)}(t)\) is independent of \(A'_{2j}(t)\) given \(A'_{2j-1}(t)\) for all \(j<k\), and therefore the above equals

$$\begin{aligned}&\frac{\mathbb {P}\big (\bigcap _{j=1}^k A'_{2j-1}(t)\cap E_n(t)\big )}{\mathbb {P}\big (A'_1(t)\cap E_n(t)\big )}\mathbb {P}\bigg (B^{(2k)}_i(t)>0 \,\,\,\,\forall i\in [1,J_{2k}(t)] \,\bigg |\, \bigcap _{j=1}^k A'_{2j-1}(t)\cap E_n(t)\bigg )\\&\quad =\frac{\mathbb {P}\big (\{B^{(2k)}_i(t)>0 \,\,\,\,\forall i\in [1,J_{2k}(t)]\}\cap \bigcap _{j=1}^k A'_{2j-1}(t)\cap E_n(t)\big )}{\mathbb {P}\big (A'_1(t)\cap E_n(t)\big )}. \end{aligned}$$

Provided that \(2k\le n\), on the event \(\bigcap _{j=1}^k A'_{2j-1}(t)\cap E_n(t)\), the law of \((B^{(2k)}_i(t))_{i\in [1,J_{2k}(t)]}\) is identical to that of \((W_{I_{2k}(t)-1+i}(t))_{i\in [1,J_{2k+1}(t)]}\), and therefore

$$\begin{aligned}&\mathbb {P}\bigg (\Big \{B^{(2k)}_i(t)>0 \,\,\,\,\forall i\in [1,J_{2k}(t)]\Big \}\cap \bigcap _{j=1}^k A'_{2j-1}(t)\cap E_n(t)\bigg ) \\&\quad = \mathbb {P}\bigg (\bigcap _{j=1}^{k+1} A'_{2j-1}(t)\cap E_n(t)\bigg ) \end{aligned}$$

which establishes the claim for k, completing the proof. \(\square \)

The proof of our third proposition in this section, Proposition 5, does not contain any major ideas; it simply combines the results above with some elementary approximations.

Proof of Proposition 5

Combining Propositions 3 and 4, we have

$$\begin{aligned} \mathbb {P}\big (V_k(t) \cap E_n(t)\big ) \le \frac{\mathbb {P}\big (\bigcap _{j=1}^{\lfloor k/2\rfloor +1} A'_{2j-1}(t) \cap E_n(t)\big )^2}{\mathbb {P}(A'_1(t)\cap E_n(t))}. \end{aligned}$$

Recalling that \(A'_{2j-1}(t)\) requires that \(W_i(t)\) is positive on the \((2j-1)\)th period, whereas \(W_i(t)\) is constant on even periods, we note that

$$\begin{aligned} \bigcap _{j=1}^{\lfloor k/2\rfloor +1} A'_{2j-1}(t) = \{W_i(t)>0 \,\,\,\, \forall i\le I_{2\lfloor k/2\rfloor +1}(t)-1\} \end{aligned}$$

and therefore

$$\begin{aligned} \mathbb {P}\big (V_k(t) \cap E_n(t)\big ) \le \frac{\mathbb {P}\big (\{W_i(t)>0 \,\,\,\, \forall i\le I_{2\lfloor k/2\rfloor +1}(t)-1\}\cap E_n(t)\big )^2}{\mathbb {P}(A'_1(t)\cap E_n(t))}. \end{aligned}$$

Now, during odd periods \(W_i(t)\) evolves as a simple symmetric random walk, and it is constant on even periods. Thus the probability that it stays positive up to step \(I_{2\lfloor k/2\rfloor +1}(t)-1\) is exactly the probability that a simple symmetric random walk stays positive up to step \(J_1(t) + J_3(t) + \cdots + J_{2\lfloor k/2\rfloor +1}(t)-1\). We deduce that

$$\begin{aligned}&\mathbb {P}\big (V_k(t) \cap E_n(t)\big ) \\&\quad \le \frac{\mathbb {P}\big (\{Z_i(t)>0 \,\,\,\, \forall i\le J_1(t) + J_3(t) + \cdots + J_{2\lfloor k/2\rfloor +1}(t)-1\}\cap E_n(t)\big )^2}{\mathbb {P}(A'_1(t)\cap E_n(t))}\\&\quad \le \frac{\mathbb {P}\big (Z_i(t)>0 \,\,\,\, \forall i\le J_1(t) + J_3(t) + \cdots + J_{2\lfloor k/2\rfloor +1}(t)-1 \,\big |\, E_n(t)\big )^2}{\mathbb {P}\big (A'_1(t)\,\big |\, E_n(t)\big )}. \end{aligned}$$

On the event \(E_n(t)\subset E^{\text {odd}}_n(t)\), we have

$$\begin{aligned} J_1(t)+ J_3(t) + \cdots + J_{2\lfloor nt/8\rfloor +1}(t)-1 \ge J_3(t)+ J_5(t) + \cdots + J_{2\lfloor nt/8\rfloor +1}(t)\ge n/8, \end{aligned}$$

and therefore for any \(k\ge nt/4\),

$$\begin{aligned} \mathbb {P}\big (V_k(t) \cap E_n(t)\big ) \le \frac{\mathbb {P}\big (Z_i(t)>0 \,\,\,\, \forall i\le n/8\big )^2}{\mathbb {P}\big (A'_1(t) \,\big |\, E_n(t)\big )} = \frac{\mathbb {P}\big (Z_i(0)>0 \,\,\,\, \forall i\le n/8\big )^2}{\mathbb {P}(A'_1(t))},\nonumber \\ \end{aligned}$$
(13)

where the equality holds by stationarity of Z(t) and the independence of \(A'_1(t)\) and \(E_n(t)\) (since \(E_n(t)\) only involves periods 2 and later). We know from Corollary 1 that

$$\begin{aligned} \mathbb {P}\big (Z_i(0)>0 \,\,\,\, \forall i\le n/8\big ) \asymp n^{-1/2}, \end{aligned}$$

and we claim that

$$\begin{aligned} \mathbb {P}(A'_1(t)) \gtrsim t^{1/2}. \end{aligned}$$

To see this, note that \(I_1(t)\) is independent of Z(0), so

$$\begin{aligned} \mathbb {P}(A'_1(t))&= \mathbb {P}(Z_i(0)>0\,\,\,\,\forall i=1,\ldots ,I_1(t))\\&\ge \mathbb {P}\Big (I_1(t)\le \Big \lceil \frac{4}{1-e^{-t}}\Big \rceil \Big )\mathbb {P}\Big (Z_i(0)>0 \,\,\,\, \forall i = 1,\ldots ,\Big \lceil \frac{4}{1-e^{-t}}\Big \rceil \Big ). \end{aligned}$$

But by Markov’s inequality

$$\begin{aligned} \mathbb {P}\Big (I_1(t)\le \Big \lceil \frac{4}{1-e^{-t}}\Big \rceil \Big ) = 1- \mathbb {P}\Big (I_1(t)> \Big \lceil \frac{4}{1-e^{-t}}\Big \rceil \Big ) \ge 1-\frac{1-e^{-t}}{4}\mathbb {E}[I_1(t)] = 1 - \frac{1}{2} = \frac{1}{2}; \end{aligned}$$

and by Corollary 1,

$$\begin{aligned} \mathbb {P}\Big (Z_i(0)>0 \,\,\,\, \forall i = 1,\ldots ,\Big \lceil \frac{4}{1-e^{-t}}\Big \rceil \Big ) \asymp (1-e^{-t})^{1/2} \asymp t^{1/2}, \end{aligned}$$

which establishes the claim. Substituting our approximations into (13), we have shown that for any \(k\ge nt/4\),

$$\begin{aligned} \mathbb {P}\big (V_k(t) \cap E_n(t)\big ) \lesssim \frac{1}{nt^{1/2}} \end{aligned}$$

as required. \(\square \)

We now proceed with the proofs of our minor lemmas.

Proof of Lemma 8

Recalling that

$$\begin{aligned} P_n = \{Z_i>0 \,\,\,\,\forall i = 1,\ldots ,n\} \,\,\,\,\text { and } \,\,\,\, P^\alpha _n = \big \{Z_i \ge i^\alpha \,\,\,\,\forall i = 1,\ldots ,n\big \}, \end{aligned}$$

we use the fact that \(\mathbb {P}(P_n^\alpha ) = \mathbb {P}(P_n^\alpha | P_n)\mathbb {P}(P_n)\). From Corollary 1 we know that \(\mathbb {P}(P_n)\asymp n^{-1/2}\). It therefore suffices to show that \(\mathbb {P}(P_n^\alpha ) \asymp \mathbb {P}(P_n)\) for any \(\alpha <1/2\). Fix \(\alpha '\in (\alpha ,1/2)\). We apply [19, Theorem 2], which says that we may choose \(\delta >0\) such that

$$\begin{aligned} \mathbb {P}(Z_i\ge \delta i^{\alpha '} \,\,\,\,\forall i=1,\ldots ,n) \ge \mathbb {P}(P_n)/2. \end{aligned}$$

Choose k such that \(\delta i^{\alpha '} \ge i^\alpha \) for all \(i\ge k\). Then

$$\begin{aligned} \mathbb {P}(Z_i \ge i^\alpha \,\,\,\, \forall i=1,\ldots ,n)&\ge \mathbb {P}(Z_i = i\,\,\,\, \forall i=1,\ldots ,k; \, Z_i \ge i^\alpha \,\,\,\, \forall i=k+1,\ldots ,n)\\&\ge \mathbb {P}(Z_i = i\,\,\,\, \forall i=1,\ldots ,k; \, Z_i \ge \delta i^{\alpha '}\,\,\,\, \forall i = k+1,\ldots ,n)\\&= 2^{-k} \mathbb {P}(Z_i \ge \delta (i+k)^{\alpha '}-k \,\,\,\,\forall i = 1,\ldots ,n-k)\\&\ge 2^{-k}\mathbb {P}(Z_i \ge \delta i^{\alpha '}\,\,\,\,\forall i = 1,\ldots ,n) \ge 2^{-(k+1)}\mathbb {P}(P_n), \end{aligned}$$

which completes the proof. \(\square \)

Proof of Lemma 9

We begin by considering \(E^{\text {odd}}_n(t)\). In order for \(E^{\text {odd}}_n(t)^c\) to occur, the sum of \(\lfloor nt/8\rfloor \) independent geometric random variables of parameter \((1-e^{-t})/2\) must be smaller than n/8, which is equivalent to a binomial random variable with parameters \((\lceil n/8\rceil , (1-e^{-t})/2)\) being at least \(\lfloor nt/8\rfloor \). Letting Y be such a random variable, we have

$$\begin{aligned} \mathbb {E}[e^{(\log 2)Y}] = \Big ((1+e^{-t})/2 + (1-e^{-t})\Big )^{\lceil n/8\rceil } = \Big (1+(1-e^{-t})/2\Big )^{\lceil n/8\rceil } \le (1+t/2)^{\lceil n/8\rceil } \le e^{(n/8+1)t/2}, \end{aligned}$$

so

$$\begin{aligned} \mathbb {P}(Y\ge \lfloor nt/8\rfloor ) \le \mathbb {E}[e^{(\log 2)Y}]e^{-(\log 2)\lfloor nt/8\rfloor } \le e^{(n/8+1)t/2 - (\log 2)(nt/8-1)} \le 2e^{1/2}e^{-(2\log 2 - 1)nt/16}. \end{aligned}$$

This proves the required decay for \(\mathbb {P}(E^{\text {odd}}_n(t)^c)\); since \(\mathbb {P}(E^{\text {even}}_n(t))=\mathbb {P}(E^{\text {odd}}_n(t))\), the same bound holds for \(\mathbb {P}(E^{\text {even}}_n(t)^c)\). The proof for \(\mathbb {P}(E'_n(t)^c)\) uses a very similar Chernoff bound, noting that \(I_j(t)\) is a sum of j independent geometric random variables of parameter \((1-e^{-t})/2\). \(\square \)
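As an aside, the duality between sums of geometric random variables and binomial random variables used at the start of this proof is easy to check numerically. The Python sketch below is purely illustrative; the parameters are arbitrary, and we use the convention that a geometric variable counts the number of trials up to and including the first success.

from scipy.stats import binom, nbinom

k, N, p = 7, 40, 0.3  # arbitrary illustrative parameters
# A sum of k geometric(p) waiting times equals k plus a negative binomial
# variable counting the failures before the kth success.
lhs = nbinom.cdf(N - k, k, p)   # P(sum of k geometric variables <= N)
rhs = binom.sf(k - 1, N, p)     # P(Binomial(N, p) >= k)
print(lhs, rhs)                 # the two probabilities coincide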

Proof of Lemma 10

Fix \(y\in \mathbb {N}\) and let

$$\begin{aligned} p_{x,k} = \mathbb {P}_x(S_i \in (0,2y) \,\,\,\, \forall i\le k). \end{aligned}$$

We claim, by induction on k, that \(p_{x,k}\) is non-decreasing in x for \(x\le y\). By symmetry this is enough to prove the lemma. Clearly the claim holds for \(k=0\). Now suppose that the claim holds for some \(k\ge 0\). If \(x=y\) then by symmetry

$$\begin{aligned} p_{y,k+1} = \frac{1}{2} p_{y-1,k} + \frac{1}{2} p_{y+1,k} = p_{y-1,k} \end{aligned}$$

which is at least \(p_{y-1,k+1}\), since the event defining \(p_{y-1,k+1}\) is contained in the event defining \(p_{y-1,k}\). On the other hand if \(x<y\), then by the induction hypothesis,

$$\begin{aligned} p_{x,k+1} = \frac{1}{2} p_{x-1,k} + \frac{1}{2} p_{x+1,k} \ge \frac{1}{2} p_{x-2,k} + \frac{1}{2} p_{x,k} = p_{x-1,k+1}. \end{aligned}$$

This completes the proof of our final lemma in this section, and therefore the proof of Proposition 1. \(\square \)
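As a sanity check, the monotonicity proved above can also be observed numerically by computing the probabilities \(p_{x,k}\) exactly with a small dynamic program; the following Python sketch is purely illustrative, with arbitrary choices of y and the number of steps.

y, K = 5, 40  # arbitrary illustrative values
# p[x][k] = probability that a SSRW started from x stays in (0, 2y) for k steps
p = [[0.0] * (K + 1) for _ in range(2 * y + 1)]
for x in range(1, 2 * y):
    p[x][0] = 1.0
for k in range(K):
    for x in range(1, 2 * y):
        p[x][k + 1] = 0.5 * p[x - 1][k] + 0.5 * p[x + 1][k]
for k in range(K + 1):
    # non-decreasing in x for x <= y (small tolerance for rounding)
    assert all(p[x][k] <= p[x + 1][k] + 1e-12 for x in range(1, y))
print("monotonicity of Lemma 10 verified for y =", y)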

7 Proof of Proposition 2: influences of \(P_n\)

In this section we give estimates on the influence of each bit \(m=1,2,\ldots ,n\) on the event \(P_n\). Proposition 2 stated that for \(m=1,\ldots ,n\),

$$\begin{aligned} \mathcal I_m(P_n) \asymp \frac{n-m+1}{n^{3/2}}, \end{aligned}$$

where \(\mathcal I_m(P_n)\) is the probability that the mth bit is pivotal for \(P_n\), and it will be our aim to prove this. We will keep n fixed and say “m is pivotal” as shorthand for “m is pivotal for \(P_n\)”.
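Before starting on the proof, we remark that the claimed order of the influences is easy to probe by simulation; the Python sketch below is purely illustrative (the choices of n, the sample size and the values of m are arbitrary) and simply flips the mth bit in independent samples to obtain a rough estimate of the probability of pivotality.

import numpy as np

rng = np.random.default_rng(0)
n, samples = 100, 50_000  # arbitrary illustrative values
X = rng.choice([-1, 1], size=(samples, n))
Z = np.cumsum(np.cumprod(X, axis=1), axis=1)  # switch walk for each sample
P = np.all(Z > 0, axis=1)                     # indicator of the event P_n
for m in [1, n // 4, n // 2, n]:
    Xf = X.copy()
    Xf[:, m - 1] *= -1                        # flip the mth bit
    Pf = np.all(np.cumsum(np.cumprod(Xf, axis=1), axis=1) > 0, axis=1)
    # rough estimate of I_m(P_n) against the predicted order (n-m+1)/n^{3/2}
    print(m, np.mean(P != Pf), (n - m + 1) / n**1.5)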

7.1 Translating \(\mathcal I_m(P_n)\) into elementary properties of the random walk

To reduce the amount of work we will take advantage of the fact that

$$\begin{aligned} \mathcal I_m(P_n) = \mathbb {P}(m \text { is pivotal}) = 2\mathbb {P}(\{m \text { is pivotal}\}\cap P_n), \end{aligned}$$
(14)

which holds since the event that m is pivotal is independent of the value of \(X_m\):

$$\begin{aligned}&\mathbb {P}(\{m \text { is pivotal}\}\cap P_n)\\&\quad = \mathbb {P}(\{m \text { is pivotal}\}\cap \{X_m=1\}\cap P_n) + \mathbb {P}(\{m \text { is pivotal}\}\cap \{X_m=-1\}\cap P_n)\\&\quad = \mathbb {P}(\{m \text { is pivotal}\}\cap \{X_m=-1\}\cap P_n^c) + \mathbb {P}(\{m \text { is pivotal}\}\cap \{X_m=1\}\cap P_n^c)\\&\quad =\mathbb {P}(\{m \text { is pivotal}\}\cap P_n^c). \end{aligned}$$

We now write down an explicit condition for the event \(\{m \text { is pivotal}\}\cap P_n\) to occur. We claim that for \(m=1,2,\ldots ,n\),

$$\begin{aligned} \{m \text { is pivotal}\}\cap P_n = \{Z_i>0 \,\,\,\,\forall i=1,\ldots ,n\}\cap \big \{\max _{m\le i\le n} Z_i\ge 2Z_{m-1}\big \}. \end{aligned}$$
(15)

In words, m is pivotal and \(P_n\) holds if and only if Z stays positive for the first n steps, and hits \(2Z_{m-1}\) between steps m and n.

To see why this is true, call the path of Z up to step \(m-1\) the first portion of the walk, and the path from step m to step n the second portion. Of course \(P_n\) entails that both portions remain positive. In order for m to be pivotal, we also need that when we change the sign of the mth bit, and therefore reflect the second portion of the path about \(Z_{m-1}\), the second portion no longer remains positive. This holds if and only if the second portion (before reflection) hits \(2Z_{m-1}\). See Fig. 3.

Fig. 3 A realisation of Z with and without the mth bit flipped (dashed red/solid blue). The black dots show the points at which the walks hit one of the two barriers at 0 or \(2Z_{m-1}\), which is the key to pivotality (color figure online)
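The characterisation (15) can also be verified exhaustively for small n; the Python sketch below is purely illustrative and simply checks (15) for every sign sequence of length \(n=8\).

from itertools import product
import numpy as np

def switch_walk(x):
    # Z_k = sum_{j<=k} prod_{i<=j} x_i
    return np.cumsum(np.cumprod(x))

n = 8
for bits in product([-1, 1], repeat=n):
    x = np.array(bits)
    Z = switch_walk(x)
    P_n = bool(np.all(Z > 0))
    for m in range(1, n + 1):
        x_flip = x.copy()
        x_flip[m - 1] *= -1
        pivotal = P_n != bool(np.all(switch_walk(x_flip) > 0))
        Z_m_minus_1 = Z[m - 2] if m >= 2 else 0
        rhs = P_n and Z[m - 1:].max() >= 2 * Z_m_minus_1
        assert (pivotal and P_n) == rhs
print("characterisation (15) verified for n =", n)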

If \(m=1\) then trivially \(Z_{m-1}=0\), so (15) reduces to

$$\begin{aligned} \{1 \text { is pivotal}\}\cap P_n = \{Z_i>0 \,\,\,\,\forall i=1,\ldots ,n\}. \end{aligned}$$

Thus, by Corollary 1, \(\mathbb {P}(\{1 \text { is pivotal}\}\cap P_n)\) is of order \(n^{-1/2}\). Proposition 2 therefore holds for \(m=1\), and we may assume from now on that \(m\ge 2\).

Returning to (15) in the case \(m\ge 2\), the next step is to split the event that m is pivotal over the possible values of \(Z_{m-1}\). Writing \(\mathbb {P}_z\) for the probability measure under which our walk starts from z instead of 0, by (14) and (15)

$$\begin{aligned}&\mathcal {I}_m(P_n) = 2\sum _{z=1}^{m-1} \mathbb {P}_0\Big (\min _{1\le i\le m-1} Z_i> 0, \, Z_{m-1} = z\Big )\\&\cdot \mathbb {P}_z\Big (\big \{\min _{i \le n-m+1} Z_i > 0\big \}\cap \big \{\max _{i\le n-m+1} Z_i\ge 2z\big \}\Big ). \end{aligned}$$

By the ballot theorem [3] (or see [1] for a thorough introduction), the probability that a simple symmetric random walk starting from 0 stays positive up to step \(m-1\) and finishes at z is \(z/(m-1)\) times the probability that the random walk finishes at z; thus

$$\begin{aligned}&\mathcal {I}_m(P_n) = 2\sum _{z=1}^{m-1} \frac{z}{m-1}\mathbb {P}_0(Z_{m-1} = z) \nonumber \\&\cdot \mathbb {P}_z\Big (\big \{\min _{i \le n-m+1} Z_i > 0\big \}\cap \big \{\max _{i\le n-m+1} Z_i\ge 2z\big \}\Big ). \end{aligned}$$
(16)

7.2 A lower bound on the influences of \(P_n\)

Define the events

$$\begin{aligned}&L = L(m,n) = \big \{\min _{i \le n-m+1} Z_i > 0\big \} \,\,\,\,\text { and } \nonumber \\&\,\,\,\, U = U(m,n,z) = \big \{\max _{i\le n-m+1} Z_i\ge 2z\big \}. \end{aligned}$$
(17)

Let

$$\begin{aligned} l(m,n) = \Big \lfloor \frac{\sqrt{n-m+1}}{2} \Big \rfloor \wedge \Big \lfloor \frac{\sqrt{m-1}}{2} \Big \rfloor . \end{aligned}$$

We want to bound \(\mathbb {P}_z(L\cap U)\) from below when \(z\le l(m,n)\). The following corollary of Lemmas 1 and 3 will be useful.

Corollary 2

If \(0\le z \le \sqrt{n-m+1}\) then

$$\begin{aligned} \mathbb {P}_z(L(m,n)) \asymp \frac{z+1}{\sqrt{n-m+1}} \end{aligned}$$

and if \(0\le z \le l(m,n)\) then

$$\begin{aligned} \mathbb {P}_z(U(m,n,z)) \asymp 1. \end{aligned}$$

Proof

From Lemma 3,

$$\begin{aligned} \mathbb {P}_z(L) = \mathbb {P}_z(Z_i>0 \,\,\,\,\forall i\le n-m+1) = \mathbb {P}_0(Z_{n-m+1}\in [-z+1,z]), \end{aligned}$$

and by Lemma 1, this is of order

$$\begin{aligned} \sum _{i=-z+1}^z \frac{1}{\sqrt{n-m+1}}\exp \Big (-\frac{i^2}{2(n-m+1)}\Big ). \end{aligned}$$

The first part of the result now follows from the fact that \(z\le \sqrt{n-m+1}\). The second part is very similar: using Lemmas 3 and 1,

$$\begin{aligned} \mathbb {P}_z(U)&= 1-\mathbb {P}_z(L) = 1-\mathbb {P}_0(Z_{n-m+1}\in [-z+1,z]) \ge \mathbb {P}_0(Z_{n-m+1}\ge z+1)\\&\ge \sum _{y=z+1}^{\lfloor \sqrt{n-m+1}\rfloor }\mathbb {P}_0(Z_{n-m+1} = y) \gtrsim \sum _{y=z+1}^{\lfloor \sqrt{n-m+1}\rfloor } \frac{1}{\sqrt{n-m+1}} \asymp 1 \end{aligned}$$

and clearly \(\mathbb {P}_z(U)\le 1\) so the proof is complete. \(\square \)
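The first estimate in Corollary 2 is also easy to observe by simulation, since under \(\mathbb {P}_z\) the walk Z has the law of a simple symmetric random walk started from z; the Python sketch below is purely illustrative, with arbitrary choices of the number of steps, the sample size and the starting points.

import numpy as np

rng = np.random.default_rng(1)
N, samples = 400, 20_000  # arbitrary illustrative values; N plays the role of n-m+1
steps = rng.choice([-1, 1], size=(samples, N))
for z in [0, 2, 5, 10]:
    walk = z + np.cumsum(steps, axis=1)
    stay_positive = np.mean(np.all(walk > 0, axis=1))
    print(z, stay_positive, (z + 1) / np.sqrt(N))  # same order of magnitude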

Lemma 11

For \(z\in [0,l(m,n)]\), we have

$$\begin{aligned} \mathbb {P}_z\Big (L(m,n)\cap U(m,n,z)\Big ) \gtrsim \frac{z}{\sqrt{n-m+1}}. \end{aligned}$$

Proof

We would like to use the FKG inequality. Unfortunately, neither L nor U is either increasing or decreasing as a function of X. However, if we replace the switch random walk Z with the compass random walk Y, setting

$$\begin{aligned} L' = \big \{\min _{i \le n-m+1} Y_i > 0\big \} \,\,\,\,\text { and }\,\,\,\, U' = \big \{\max _{i\le n-m+1} Y_i\ge 2z\big \}, \end{aligned}$$

then \(L'\) and \(U'\) are both increasing. Thus the FKG inequality (1) tells us that

$$\begin{aligned} \mathbb {P}_z(L'\cap U')\ge \mathbb {P}_z(L')\mathbb {P}_z(U') \end{aligned}$$

and since Y and Z have the same distribution,

$$\begin{aligned} \mathbb {P}_z(L\cap U) = \mathbb {P}_z(L'\cap U') \ge \mathbb {P}_z(L')\mathbb {P}_z(U') = \mathbb {P}_z(L)\mathbb {P}_z(U). \end{aligned}$$

The result now follows from Corollary 2. \(\square \)

Substituting the result of Lemma 11 into (16) gives that

$$\begin{aligned} \mathcal I_m(P_n)&\ge 2\sum _{z=1}^{l(m,n)} \frac{z}{m-1}\mathbb {P}_0(Z_{m-1}=z) \cdot \mathbb {P}_z\Big (\big \{\min _{i \le n-m+1} Z_i > 0\big \}\cap \big \{\max _{i\le n-m+1} Z_i\ge 2z\big \}\Big )\\&\gtrsim \sum _{z=1}^{l(m,n)} \frac{z}{m-1}\mathbb {P}_0(Z_{m-1}=z) \cdot \frac{z}{\sqrt{n-m+1}}. \end{aligned}$$

Applying Lemma 1 again tells us that for \(z\in [1,l(m,n)]\), we have \(\mathbb {P}_0(Z_{m-1}=z)\asymp (m-1)^{-1/2}\); so

$$\begin{aligned} \mathcal I_m(P_n) \gtrsim \sum _{z=1}^{l(m,n)} \frac{z}{m-1}\cdot \frac{1}{\sqrt{m-1}} \cdot \frac{z}{\sqrt{n-m+1}} \asymp \frac{l(m,n)^3}{(m-1)^{3/2}(n-m+1)^{1/2}}. \end{aligned}$$

If \(m\le n/2\), then the right-hand side above is of order \(n^{-1/2}\), and if \(m>n/2\), it is of order \((n-m+1)/n^{3/2}\). In either case this completes the proof of the lower bound in Proposition 2.

7.3 An upper bound on the influences of \(P_n\)

We will now bound (16) from above. This direction is far more involved as we need to consider the entire sum; for the lower bound we could restrict to just the values of z that gave the biggest contribution. We recall the definitions of L and U from (17). As part of our proof we will have to bound several sums of the following form.

Lemma 12

If \(c\in \mathbb {N}\) and \(r\ge 0\) then

$$\begin{aligned} \sum _{z=0}^\infty (z+1)^r \exp \Big (-\frac{z^2}{c}\Big ) \lesssim c^{(r+1)/2}. \end{aligned}$$

Proof

Letting \(C = \lceil \sqrt{c}\rceil \), we have

$$\begin{aligned} \sum _{z=0}^\infty (z+1)^r \exp \Big (-\frac{z^2}{c}\Big )&= \sum _{k=0}^\infty \sum _{z=kC}^{(k+1)C-1} (z+1)^r \exp \Big (-\frac{z^2}{c}\Big )\\&\le \sum _{k=0}^\infty C((k+1)C)^r \exp \Big (-\frac{k^2 C^2}{c}\Big )\\&\le C^{r+1} \sum _{k=0}^\infty (k+1)^r \exp (-k^2) \asymp C^{r+1}. \end{aligned}$$

\(\square \)
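For illustration only, the bound in Lemma 12 can be observed numerically as follows; the values of r and c are arbitrary, and the sum is truncated where the summands have become negligible.

import numpy as np

for r in [0, 1, 2]:
    for c in [10, 100, 1000, 10000]:
        z = np.arange(0, 50 * int(np.sqrt(c)))
        s = np.sum((z + 1.0)**r * np.exp(-z**2 / c))
        print(r, c, s / c**((r + 1) / 2))  # ratios stay bounded in c for each r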

Let \(M=\lfloor (m-1)^{3/4}\rfloor \). We begin our upper bound on (16) by splitting the sum depending on whether z is larger or smaller than M: from (16),

$$\begin{aligned} \mathcal I_m(P_n)&= 2 \sum _{z=1}^{M} \frac{z}{m-1} \mathbb {P}_0(Z_{m-1} = z) \mathbb {P}_z(L \cap U) + 2 \sum _{z=M+1}^{m-1} \frac{z}{m-1} \mathbb {P}_0(Z_{m-1} = z) \mathbb {P}_z(L \cap U)\nonumber \\&\le 2 \sum _{z = 1}^M \frac{z}{m-1} \mathbb {P}_0(Z_{m-1} = z) \mathbb {P}_z(L\cap U) + 2 \sum _{z=M+1}^{m-1} \mathbb {P}_0(Z_{m-1} = z) \mathbb {P}_z(L). \end{aligned}$$
(18)

We label the two sums in (18) by (18 i) and (18 ii).

Addressing the second sum first, we note that \(\mathbb {P}_z(L)\) is increasing in z, so

$$\begin{aligned} (18\,\text {ii}) \le 2\mathbb {P}_{m-1}(L) \sum _{z=M+1}^{m-1} \mathbb {P}_0(Z_{m-1} = z) = 2\mathbb {P}_{m-1}(L)\mathbb {P}_0(Z_{m-1}>M). \end{aligned}$$

By Lemma 2 with \(x=M\), we have

$$\begin{aligned} \mathbb {P}_0(Z_{m-1}>M) \le \exp (-(m-1)^{1/2}/2). \end{aligned}$$

If \(m-1 > (n-m+1)^{1/2}\) then we use the trivial bound \(\mathbb {P}_{m-1}(L)\le 1\); if instead \(m-1\le (n-m+1)^{1/2}\), then we apply Corollary 2 to obtain

$$\begin{aligned} \mathbb {P}_{m-1}(L) \asymp \frac{m}{\sqrt{n-m+1}}. \end{aligned}$$

Putting these estimates together, we have shown that

$$\begin{aligned} (18\,\text {ii})\lesssim \Big (\frac{m}{\sqrt{n-m+1}}\wedge 1\Big )\exp (-(m-1)^{1/2}/2). \end{aligned}$$

By considering the two cases \(m<\sqrt{n}\) and \(m\ge \sqrt{n}\) separately, one can check that in either case the above is at most a constant times \((n-m+1)n^{-3/2}\), as required. It thus remains to bound (18 i).

To do this we split it again depending on whether z exceeds \(\lfloor (n-m+1)^{1/2}\rfloor \). If it does not, we bound \(\mathbb {P}_z(L\cap U)\) above by \(\mathbb {P}_z(L)\) and apply Lemma 1 and Corollary 2. Letting \(M' = M\wedge \lfloor (n-m+1)^{1/2}\rfloor \), we obtain

$$\begin{aligned}&\sum _{z=1}^{M'} \frac{z}{m-1}\mathbb {P}_0(Z_{m-1}=z)\mathbb {P}_z(L\cap U) \nonumber \\&\quad \le \sum _{z=1}^{M'} \frac{z}{m-1} \mathbb {P}_0(Z_{m-1}=z)\mathbb {P}_z(L)\nonumber \\&\quad \asymp \sum _{z=1}^{M'} \frac{z}{m-1} \frac{1}{(m-1)^{1/2}}e^{-z^2/(2(m-1))} \frac{z+1}{(n-m+1)^{1/2}}. \end{aligned}$$
(19)

If \(m\le n/2\), then by Lemma 12,

$$\begin{aligned} \sum _{z=1}^{M'} z(z+1) e^{-\frac{z^2}{2(m-1)}} \lesssim (m-1)^{3/2}, \end{aligned}$$

whereas if \(m>n/2\), then

$$\begin{aligned} \sum _{z=1}^{M'} z(z+1) e^{-\frac{z^2}{2(m-1)}} \le \sum _{z=1}^{\lfloor (n-m+1)^{1/2}\rfloor } z(z+1) \asymp (n-m+1)^{3/2}. \end{aligned}$$

Applying these two bounds to (19) gives that

$$\begin{aligned} \sum _{z=1}^{M'} \frac{z}{m-1}\mathbb {P}_0(Z_{m-1}=z)\mathbb {P}_z(L\cap U) \lesssim \frac{(m-1)^{3/2}\wedge (n-m+1)^{3/2}}{(m-1)^{3/2}(n-m+1)^{1/2}} \lesssim \frac{n-m+1}{n^{3/2}}, \end{aligned}$$
(20)

as required.

When \(z>(n-m+1)^{1/2}\) then we bound \(\mathbb {P}_z(L\cap U)\) above by \(\mathbb {P}_z(U)\) instead of \(\mathbb {P}_z(L)\). Applying Lemma 1, we have

$$\begin{aligned}&\sum _{z=M'+1}^{M} \frac{z}{m-1}\mathbb {P}_0(Z_{m-1}=z)\mathbb {P}_z(L\cap U)\\&\quad \lesssim \sum _{z=M'+1}^M \frac{z}{m-1} \frac{1}{(m-1)^{1/2}}e^{-z^2/(2(m-1))} \mathbb {P}_z(U), \end{aligned}$$

and by Lemmas 3 and 2,

$$\begin{aligned} \mathbb {P}_z(U)&= 1-\mathbb {P}_z(Z_i<2z \,\,\,\,\forall i\le n-m+1)\\&= 1 - \mathbb {P}(Z_{n-m+1}\in [-z+1,z]) \le 2\mathbb {P}(Z_{n-m+1}\ge z) \\&\le 2\exp \Big (-\frac{z^2}{2(n-m+1)}\Big ). \end{aligned}$$

Thus

$$\begin{aligned}&\sum _{z=M'+1}^{M} \frac{z}{m-1}\mathbb {P}_0(Z_{m-1}=z)\mathbb {P}_z(L\cap U) \nonumber \\&\quad \le \sum _{z=M'+1}^M \frac{2z}{(m-1)^{3/2}}e^{-z^2/(2(m-1)) - z^2/(2(n-m+1))}. \end{aligned}$$
(21)

If \(m>n/2\), then the above is at most

$$\begin{aligned} \sum _{z=0}^\infty \frac{2z}{(m-1)^{3/2}}e^{- z^2/(2(n-m+1))} \end{aligned}$$

and by Lemma 12, this is of order at most \((n-m+1)/n^{3/2}\). On the other hand, if \(m<n/2\) and \(M'< M\), then

$$\begin{aligned} (21)&\le \sum _{z=\lfloor (n-m+1)^{1/2}\rfloor }^\infty \frac{2z}{(m-1)^{3/2}}\exp \Big (-\frac{z^2}{2(m-1)}\Big )\\&\lesssim \frac{1}{(m-1)^{3/2}}\exp \Big (-\frac{n-m+1}{2(m-1)}\Big ) \sum _{z=0}^\infty z\exp \Big (-\frac{z^2}{2(m-1)}\Big ) \end{aligned}$$

and by Lemma 12, this is of order at most

$$\begin{aligned} \frac{1}{(m-1)^{1/2}}\exp \Big (-\frac{n-m+1}{2(m-1)}\Big ). \end{aligned}$$

Since \(e^{-x/2} \le x^{-1/2}\) for all \(x>0\), this is bounded above by \((n-m+1)^{-1/2}\). Thus we have shown that when \(M'<M\),

$$\begin{aligned} (21)\lesssim \frac{n-m+1}{n^{3/2}} \wedge \frac{1}{(n-m+1)^{1/2}} \le \frac{n-m+1}{n^{3/2}}, \end{aligned}$$

and of course when \(M'=M\) the sum is empty and (21)\(=0\). Combining this with (20), we have shown that

$$\begin{aligned} (18\,\text {i}) \lesssim \frac{n-m+1}{n^{3/2}}, \end{aligned}$$

which completes the proof of Proposition 2.

8 Proofs of Lemmas 4, 5 and 6

To complete our proof of the lower bound on the Hausdorff dimension of \(\mathcal {E}\) outlined in Sect. 5, we need several technical lemmas. In this section we prove those results, beginning with Lemma 4, which is based on [20, Lemma 6.2].

Proof of Lemma 4

If we let \(\mu _n^\alpha \) be the measure on [0, 1] given by

$$\begin{aligned} \mu _n^\alpha (A) = \frac{1}{\mathbb {P}(P_n^\alpha )} \int _A \mathbb {1}_{P_n^\alpha (t)}\mathop {}\mathrm {d}t, \end{aligned}$$

then noting that \(\mu _n^\alpha \) is supported on \(\bar{T}_n^\alpha \), [20, Lemma 6.2] gives a sufficient condition for the Hausdorff dimension of \(\bigcap _n \bar{T}_n^\alpha \) to be at least \(\gamma \). This condition is that there exists a finite constant c such that for infinitely many n,

$$\begin{aligned} \mu _n^\alpha ([0,1]) \ge 1/c \,\,\,\, \text { and } \,\,\,\, \int _0^1 \int _0^1 |t-s|^{-\gamma } \mathop {}\mathrm {d}\mu _n^\alpha (s) \mathop {}\mathrm {d}\mu _n^\alpha (t) \le c. \end{aligned}$$

In order to prove our lemma it therefore suffices to show that this condition holds with positive probability for \(\alpha <1/2\).

We start by bounding \(\mu _n^\alpha ([0,1])\) from below. By the Paley-Zygmund inequality,

$$\begin{aligned} \mathbb {P}\Big (\mu _n^\alpha ([0,1]) \ge \frac{1}{2}\mathbb {E}[\mu _n^\alpha ([0,1])]\Big ) \ge \frac{\mathbb {E}[\mu _n^\alpha ([0,1])]^2}{4\mathbb {E}[\mu _n^\alpha ([0,1])^2]}. \end{aligned}$$
(22)

By Fubini’s theorem and stationarity,

$$\begin{aligned} \mathbb {E}[\mu _n^\alpha ([0,1])] = \frac{1}{\mathbb {P}(P_n^\alpha )} \int _0^1 \mathbb {P}(P_n^\alpha (t))\mathop {}\mathrm {d}t = \frac{1}{\mathbb {P}(P_n^\alpha )} \int _0^1 \mathbb {P}(P_n^\alpha ) \mathop {}\mathrm {d}t = 1. \end{aligned}$$

Also, for any \(\gamma \in [0,1)\),

$$\begin{aligned} \mathbb {E}[\mu _n^\alpha ([0,1])^2] = \mathbb {E}\Big [\int _0^1 \int _0^1 \mathbb {1}_{P_n^\alpha (s)}\mathbb {1}_{P_n^\alpha (t)} \mathop {}\mathrm {d}s \mathop {}\mathrm {d}t\Big ] = \mathbb {E}[\Phi _n^\alpha (0)] \le \mathbb {E}[\Phi _n^\alpha (\gamma )]. \end{aligned}$$

Substituting these estimates into (22), we have

$$\begin{aligned} \mathbb {P}(\mu _n^\alpha ([0,1])\ge 1/2) \ge \frac{1}{4\mathbb {E}[\Phi _n^\alpha (\gamma )]} \end{aligned}$$

so fixing \(\gamma \) to take the value in the statement of the lemma and letting \(S = \sup _n \mathbb {E}[\Phi _n^\alpha (\gamma )]\), we have

$$\begin{aligned} \inf _n \mathbb {P}(\mu _n^\alpha ([0,1])\ge 1/2) \ge \frac{1}{4S}. \end{aligned}$$

Now note that

$$\begin{aligned} \Phi _n^\alpha (\gamma ) = \int _0^1 \int _0^1 |t-s|^{-\gamma } \mathop {}\mathrm {d}\mu _n^\alpha (s) \mathop {}\mathrm {d}\mu _n^\alpha (t), \end{aligned}$$

so the second part of our desired condition requires us to show that \(\Phi _n^\alpha (\gamma )\le c\) for some constant c and infinitely many n. By Markov’s inequality,

$$\begin{aligned} \sup _n \mathbb {P}(\Phi _n^\alpha (\gamma )> 8S^2) \le \sup _n \frac{\mathbb {E}[\Phi _n^\alpha (\gamma )]}{8S^2} = \frac{1}{8S}, \end{aligned}$$

and therefore

$$\begin{aligned} \inf _n \mathbb {P}\big (\mu _n^\alpha ([0,1])\ge 1/2 \text { and } \Phi _n^\alpha (\gamma )\le 8S^2\big ) \ge \inf _n \mathbb {P}(\mu _n^\alpha ([0,1])\ge 1/2) - \sup _n \mathbb {P}(\Phi _n^\alpha (\gamma )> 8S^2) \ge \frac{1}{8S}. \end{aligned}$$

By Fatou’s lemma we deduce that

$$\begin{aligned} \mathbb {P}(\mu _n^\alpha ([0,1])\ge 1/2 \text { and } \Phi _n^\alpha (\gamma ) \le 8S^2 \text { for infinitely many } n) \ge \frac{1}{8S}, \end{aligned}$$

and the proof is complete. \(\square \)

Our proof of Lemma 5 is based on the equivalent result for percolation by Häggström, Peres and Steif [14, Lemma 3.2].

Proof of Lemma 5

Recall that for each j, \((N_j(t), t\ge 0)\) is a Poisson process of rate 1 that decides when \(X_j\) rerandomises. For \(i\ge 0\), let \(\tau ^{(i)}_j = \inf \{t\ge 0 : N_j(t)=i\}\), the time of the ith rerandomisation of \(X_j\).

Fix i and j. Since each step of the random walk evolves (in time) independently, almost surely at time \(\tau ^{(i)}_j\) the random walk hits both 0 and \(2Z_{j-1}(\tau ^{(i)}_j)\) after step j. Since flipping the jth bit reflects the path after step \(j-1\) about \(Z_{j-1}(\tau ^{(i)}_j)\), hitting \(2Z_{j-1}(\tau ^{(i)}_j)\) for one walk corresponds to hitting 0 for the other; thus for large enough n, the random walk hits 0 before step n regardless of the state of step j. The random walk therefore also falls below the line \(k\mapsto k^\alpha \) before step n (for large enough n), regardless of the state of step j. That is, almost surely, \(\tau ^{(i)}_j \not \in \bar{T}_n^\alpha \setminus T_n^\alpha \) for all large n.

However, since the system only changes when one of the \(X_j\) rerandomises, for each \(\alpha \ge 0\) and \(n\in \mathbb {N}\) we have

$$\begin{aligned} \bar{T}_n^\alpha \setminus T_n^\alpha \subset \{\tau ^{(i)}_j : i=0,1,2,\ldots ,\,\, j=1,2,\ldots ,n\}. \end{aligned}$$
(23)

Thus for each N we have

$$\begin{aligned} \bigcap _{n\ge N} (\bar{T}_n^\alpha \setminus T_n^\alpha ) = \emptyset \,\,\,\, \text { almost surely.} \end{aligned}$$

However, since the \(T_n^\alpha \) are nested,

$$\begin{aligned} \Big (\bigcap _{n\ge 1}\bar{T}_n^\alpha \Big ) \setminus \Big (\bigcap _{n\ge 1} T_n^\alpha \Big ) \subset \bigcup _{N\ge 1} \bigcap _{n\ge N} (\bar{T}_n^\alpha \setminus T_n^\alpha ) \end{aligned}$$

so the left-hand side is also empty almost surely, as required. \(\square \)

Finally, Lemma 6 is a standard application of the ergodic theorem.

Proof of Lemma 6

To apply the ergodic theorem (see for example [6, Theorem 24.1] and the surrounding chapter for further details), we should formally construct our probability space. For each \(i\in \{0,1,2,\ldots \}\) and \(j\in \mathbb {N}\) we take a Bernoulli random variable \(B^{(i)}_j\) and an exponential random variable \(E^{(i)}_j\) of parameter 1. We view our space \(\Omega \) as the set of sequences \((((B^{(i)}_j, E^{(i)}_j)_{i\ge 0})_{j\ge 1})\), with the product \(\sigma \)-algebra. We can then define \(X_j(t)\) to take the value \(B^{(i)}_j\) whenever \(\sum _{k<i}E^{(k)}_j \le t < \sum _{k\le i} E^{(k)}_j\). We have the shift map \(\theta : \Omega \rightarrow \Omega \) which maps \((((B^{(i)}_j, E^{(i)}_j)_{i\ge 0})_{j\ge 1})\) to \((((B^{(i)}_j, E^{(i)}_j)_{i\ge 0})_{j\ge 2}\)); in practical terms, \(\theta \) deletes \(X_1(t)\) and builds our (dynamical) random walks from \((X_2(t),X_3(t),\ldots )\) instead. Standard methods show that \(\theta \) is ergodic. Define

$$\begin{aligned} \mathcal {E}'_\alpha = \Big \{t\in [0,1] : \liminf _{n\rightarrow \infty } \frac{-Z_n(t)}{n^\alpha } > 0 \Big \}. \end{aligned}$$

For any \(\alpha \ge 0\), the Hausdorff dimension of \(\mathcal {E}_\alpha \cup \mathcal {E}'_\alpha \) is invariant under \(\theta \), and therefore constant almost surely by the ergodic theorem. By symmetry, the Hausdorff dimension of \(\mathcal {E}_\alpha \) equals that of \(\mathcal {E}'_\alpha \). Since the Hausdorff dimension of the union of two sets is the maximum of their Hausdorff dimensions, the Hausdorff dimension of \(\mathcal {E}_\alpha \) must therefore equal that of \(\mathcal {E}_\alpha \cup \mathcal {E}'_\alpha \), and thus be constant almost surely. \(\square \)