Limit Theorems for the ‘Laziest’ Minimal Random Walk Model of Elephant Type

Abstract

We consider a minimal model of one-dimensional discrete-time random walk with step-reinforcement, introduced by Harbola, Kumar, and Lindenberg (2014): The walker can move forward (never backward), or remain at rest. For each \(n=1,2,\ldots \), a random time \(U_n\) between 1 and n is chosen uniformly, and if the walker moved forward [resp. remained at rest] at time \(U_n\), then at time \(n+1\) it can move forward with probability p [resp. q], or with probability \(1-p\) [resp. \(1-q\)] it remains at its present position. For the case \(q>0\), several limit theorems are obtained by Coletti, Gava, and de Lima (2019). In this paper we prove limit theorems for the case \(q=0\), where the walker can exhibit all three forms of asymptotic behavior as p is varied. As a byproduct, we obtain limit theorems for the cluster size of the root in percolation on uniform random recursive trees.
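
To make the dynamics concrete, the following minimal simulation sketch (ours, for illustration only) implements the \(q=0\) case described above; the parameter s denotes the probability that the very first step is a forward move, following the parametrization \(P_{p,q,s}\) used in the appendices.

import random

def laziest_walk_position(n_steps, p, s, rng=random):
    """Position H_n of the q = 0 ('laziest') minimal random walk after n_steps steps.
    X[i] = 1 if the walker moves forward at time i + 1, and 0 if it stays put."""
    X = [1 if rng.random() < s else 0]              # first step: forward with probability s
    for n in range(1, n_steps):
        u = rng.randrange(n)                        # uniform random time U_n in {1, ..., n}
        if X[u] == 1:
            X.append(1 if rng.random() < p else 0)  # remembered a forward step: move with prob. p
        else:
            X.append(0)                             # remembered a rest: q = 0 forces another rest
    return sum(X)

# rough Monte Carlo estimate of the mean position for a few values of p (s = 0.7 is arbitrary)
for p in (0.3, 0.6, 0.9):
    trials = 500
    print(p, sum(laziest_walk_position(2000, p, s=0.7) for _ in range(trials)) / trials)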

References

  1. Baur, E., Bertoin, J.: Elephant random walks and their connection to Pólya-type urns. Phys. Rev. E 94, 052134 (2016)

  2. Bercu, B.: A martingale approach for the elephant random walk. J. Phys. A 51, 015201 (2018)

  3. Bercu, B., Laulin, L.: On the multi-dimensional elephant random walk. J. Stat. Phys. 175, 1146–1163 (2019)

  4. Bertoin, J.: Noise reinforcement for Lévy processes, to appear in Ann. Inst. Henri Poincaré Probab. Stat., arXiv:1810.08364 (2018)

  5. Bertoin, J.: Universality of noise reinforced Brownian motions, arXiv:2002.09166 (2020)

  6. Bingham, N.H., Goldie, C.M., Teugels, J.L.: Regular Variation, Encyclopedia of Mathematics and Its Applications, vol. 27. Cambridge University Press, Cambridge (1989)

  7. Businger, S.: The shark random swim (Lévy flight with memory). J. Stat. Phys. 172, 701–717 (2018). (See also arXiv:1710.05671v3)

  8. Coletti, C.F., Gava, R.J., de Lima, L.R.: Limit theorems for a minimal random walk model. J. Stat. Mech. 2019, 083206 (2019)

  9. Coletti, C.F., Gava, R.J., Schütz, G.M.: Central limit theorem for the elephant random walk. J. Math. Phys. 58, 053303 (2017)

  10. Coletti, C.F., Gava, R.J., Schütz, G.M.: A strong invariance principle for the elephant random walk. J. Stat. Mech. 2017, 123207 (2017)

  11. Drezner, Z., Farnum, N.: A generalized binomial distribution. Comm. Stat. Theory Methods 22, 3051–3063 (1993)

  12. Gut, A., Stadtmüller, U.: Variations of the elephant random walk, arXiv:1812.01915 (2018)

  13. Gut, A., Stadtmüller, U.: Elephant random walks with delays, arXiv:1906.04930 (2019)

  14. Häggström, O.: Coloring percolation clusters at random. Stoch. Proc. Appl. 96, 213–242 (2001)

  15. Hall, P., Heyde, C.C.: Martingale Limit Theory and Its Application. Probability and Mathematical Statistics. Academic Press, New York (1980)

  16. Harbola, U., Kumar, N., Lindenberg, K.: Memory-induced anomalous dynamics in a minimal random walk model. Phys. Rev. E 90, 022136 (2014)

  17. Heyde, C.C.: On central limit and iterated logarithm supplements to the martingale convergence theorem. J. Appl. Probab. 14, 758–775 (1977)

  18. Heyde, C.C.: Asymptotics and criticality for a correlated Bernoulli process. Aust. N. Z. J. Stat. 46, 53–57 (2004)

  19. Kubota, N., Takei, M.: Gaussian fluctuation for superdiffusive elephant random walks. J. Stat. Phys. 177, 1157–1171 (2019)

  20. Kumar, N., Harbola, U., Lindenberg, K.: Memory-induced anomalous dynamics: emergence of diffusion. Phys. Rev. E 82, 021101 (2010)

  21. Kürsten, R.: Random recursive trees and the elephant random walk. Phys. Rev. E 93, 032111 (2016)

  22. Pollard, H.: The completely monotonic character of the Mittag-Leffler function \(E_a(-x)\). Bull. Am. Math. Soc. 54, 1115–1116 (1948)

  23. Schütz, G.M., Trimper, S.: Elephants can always remember: exact long-range memory effects in a non-Markovian random walk. Phys. Rev. E 70, 045101 (2004)

Author information

Corresponding author

Correspondence to Masato Takei.

Additional information

Communicated by Satya Majumdar.


M.T. is partially supported by JSPS Grant-in-Aid for Scientific Research (B) No. 19H01793 and (C) No. 19K03514.

Appendices

Appendix A: The Minimal Random Walk Model and Percolation on Random Recursive Trees

We explore a relation between the minimal random walk model by Harbola, Kumar, and Lindenberg [16], explained in the Introduction, and percolation on random recursive trees. Throughout this section we assume that \(0 \le q< p < 1\), and set \(\alpha =p-q\) and \(\rho = q/(1-\alpha )\).

The sequence \(\{T_i\}\) of random recursive trees is defined in Sect. 2.3. Consider bond percolation on \(T_n\) with parameter \(\alpha \). The expectation with respect to this model is denoted by \(E_{\alpha } [ \,\cdot \,]\). There are at most n clusters, which are denoted by \(\mathcal {C}_{1,n},\,\mathcal {C}_{2,n},\,\ldots ,\,\mathcal {C}_{n,n}\) (for convenience we regard \(\mathcal {C}_{j,n}=\emptyset \) if j is larger than the number of clusters). We quote some results from Kürsten [21].
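
For readers who wish to experiment, the following one-pass sketch (ours; the function name and parameter choices are for illustration only) grows \(T_n\) and performs bond percolation with parameter \(\alpha \) at the same time, labelling each cluster by its smallest vertex, so that the cluster with label 1 is \(\mathcal {C}_{1,n}\).

import random
from collections import Counter

def percolated_rrt_cluster_sizes(n, alpha, rng=random):
    """Grow the random recursive tree T_n (each new vertex attaches to a uniformly
    chosen earlier vertex) and keep each new edge open with probability alpha.
    Returns a Counter mapping cluster label (smallest vertex in the cluster) to its size."""
    label = {1: 1}                         # vertex 1 starts its own cluster
    for v in range(2, n + 1):
        parent = rng.randint(1, v - 1)     # uniform attachment
        if rng.random() < alpha:           # edge open: v joins its parent's cluster
            label[v] = label[parent]
        else:                              # edge closed: v starts a new cluster
            label[v] = v
    return Counter(label.values())

sizes = percolated_rrt_cluster_sizes(1000, alpha=0.6)
print(sizes[1], len(sizes))    # size of the root cluster C_{1,n} and the number of clusters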

Lemma 1

For bond percolation on \(T_n\) with parameter \(\alpha \in (0,1)\),

$$\begin{aligned} E_{\alpha } [\# \mathcal {C}_{1,n} ]&= \dfrac{\Gamma (n+\alpha )}{\Gamma (1+\alpha )\Gamma (n)}, \end{aligned}$$
(14)
$$\begin{aligned} E_{\alpha } [(\# \mathcal {C}_{1,n})^2 ]&= \dfrac{2\Gamma (n+2\alpha )}{\Gamma (1+2\alpha )\Gamma (n)} - \dfrac{\Gamma (n+\alpha )}{\Gamma (1+\alpha )\Gamma (n)}, \end{aligned}$$
(15)
$$\begin{aligned} \sum _{j=1}^n E_{\alpha } [(\# \mathcal {C}_{j,n})^2 ]&= {\left\{ \begin{array}{ll} \dfrac{1}{1-2\alpha } \cdot n+ \dfrac{1}{2\alpha -1} \cdot \dfrac{\Gamma (n+2\alpha )}{\Gamma (2\alpha )\Gamma (n)} &{}(\alpha \ne 1/2), \\ \displaystyle n\sum _{\ell =1}^n \dfrac{1}{\ell }&{}(\alpha =1/2). \end{array}\right. } \end{aligned}$$
(16)

Remark 2

In [21], (14) and (16) are derived from basic results on the original elephant random walk, found in [23]. In view of the connection with the laziest minimal random walk model explained in Sect. 2.3, (14) and (15) are paraphrases of (7); they can be obtained by solving relatively simple difference equations.
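
As a numerical sanity check on (14) and (15) (ours, not part of [21]): when vertex \(k+1\) is added, it joins the root cluster exactly when it attaches to a vertex of \(\mathcal {C}_{1,k}\) (probability \(\#\mathcal {C}_{1,k}/k\)) and the connecting edge is open (probability \(\alpha \)), so \(\#\mathcal {C}_{1,n}\) can be simulated directly and compared with the Gamma-function formulas.

import math, random

def root_cluster_size(n, alpha, rng=random):
    c = 1                                   # #C_{1,1} = 1
    for k in range(1, n):                   # add vertex k + 1 to a tree with k vertices
        if rng.random() < alpha * c / k:    # attaches inside the root cluster AND the edge is open
            c += 1
    return c

def gamma_ratio(n, a):                      # Gamma(n + a) / Gamma(n), computed stably
    return math.exp(math.lgamma(n + a) - math.lgamma(n))

n, alpha, trials = 500, 0.7, 20000
samples = [root_cluster_size(n, alpha) for _ in range(trials)]
m1 = sum(samples) / trials
m2 = sum(x * x for x in samples) / trials
print(m1, gamma_ratio(n, alpha) / math.gamma(1 + alpha))          # compare with (14)
print(m2, 2 * gamma_ratio(n, 2 * alpha) / math.gamma(1 + 2 * alpha)
          - gamma_ratio(n, alpha) / math.gamma(1 + alpha))        # compare with (15)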

Let \(\xi _1,\xi _2,\ldots ,\xi _n\) be independent random variables, also independent of the bond percolation, satisfying

$$\begin{aligned}&P_{p,q,s} (\xi _1=1)=1-P_{p,q,s}(\xi _1=0)=s,\quad \text{ and } \\&P_{p,q,s}(\xi _j=1)=1-P_{p,q,s}(\xi _j=0)=\rho \quad \text{ for } j>1. \end{aligned}$$

The transition probability in (1) can be interpreted as follows: For each step \(n=2,3,\ldots \),

  • with probability \(\alpha \), the walker repeats its behavior at a uniformly chosen earlier time, and

  • with probability \(1-\alpha \), the walker moves forward with probability \(\rho \), or remains at rest otherwise.

Similarly to [21], we can see that \(H_n\) has the same distribution as

$$\begin{aligned} \sum _{j=1}^n \xi _j \cdot (\# \mathcal {C}_{j,n}). \end{aligned}$$
(17)
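
The distributional identity (17) can be checked numerically. The sketch below (ours; all helper names and parameter values are for illustration only) simulates \(H_n\) once via the two-case interpretation above and once via the percolation representation with marks \(\xi _j\), and compares the first two empirical moments.

import random
from collections import Counter

def H_direct(n, alpha, rho, s, rng=random):
    """H_n via the interpretation above: with probability alpha repeat the behaviour
    at a uniformly chosen earlier time, otherwise move forward with probability rho."""
    X = [1 if rng.random() < s else 0]
    for k in range(1, n):
        if rng.random() < alpha:
            X.append(X[rng.randrange(k)])
        else:
            X.append(1 if rng.random() < rho else 0)
    return sum(X)

def H_percolation(n, alpha, rho, s, rng=random):
    """H_n via (17): percolated random recursive tree plus independent cluster marks xi."""
    label = {1: 1}
    for v in range(2, n + 1):
        parent = rng.randint(1, v - 1)
        label[v] = label[parent] if rng.random() < alpha else v
    total = 0
    for rep, size in Counter(label.values()).items():
        mark_prob = s if rep == 1 else rho      # the root cluster carries xi_1 ~ Bernoulli(s)
        if rng.random() < mark_prob:
            total += size
    return total

n, alpha, rho, s, trials = 400, 0.6, 0.3, 0.9, 5000
for simulate in (H_direct, H_percolation):
    vals = [simulate(n, alpha, rho, s) for _ in range(trials)]
    m1 = sum(vals) / trials
    m2 = sum(v * v for v in vals) / trials
    print(simulate.__name__, m1, m2 - m1 * m1)   # empirical mean and variance should agree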

In the case \(q>0\), computing the second moment (and the variance) of \(H_n\) by solving difference equations is straightforward but quite tedious, as can be imagined from the rather complicated equations (8), (9) and (10) in [16]. Using the above connection with percolation, we can easily obtain concise formulae in terms of the moments of the open cluster sizes.

Theorem 4

Let \(E_{p,q,s}\) and \(V_{p,q,s}\) denote the expectation and the variance for the minimal random walk model. Assume that \(0 \le q< p < 1\), and set \(\alpha =p-q\) and \(\rho = q/(1-\alpha )\). Then we have the following.

$$\begin{aligned} E_{p,q,s} [H_n]&= \rho n + (s-\rho ) E_{\alpha } [\# \mathcal {C}_{1,n} ], \end{aligned}$$
(18)
$$\begin{aligned} V_{p,q,s} [H_n]&= \rho (1-\rho ) \sum _{j=1}^n E_{\alpha }[(\# \mathcal {C}_{j,n})^2] \nonumber \\&\quad + (1-2\rho )(s-\rho ) E_{\alpha } [(\# \mathcal {C}_{1,n})^2 ] - (s-\rho )^2 (E_{\alpha } [\# \mathcal {C}_{1,n}])^2. \end{aligned}$$
(19)

Proof

Since \(E_{p,q,s} [\xi _1]=s\) and \(E_{p,q,s} [\xi _j]=\rho \) for \(j>1\), we have

$$\begin{aligned} E_{p,q,s} [H_n]&= \sum _{j=1}^n E_{p,q,s} [\xi _j] \cdot E_{\alpha } [\# \mathcal {C}_{j,n}] = s E_{\alpha }[\# \mathcal {C}_{1,n}]+\rho \sum _{j=2}^n E_{\alpha }[\# \mathcal {C}_{j,n}] \\&= \rho \sum _{j=1}^n E_{\alpha }[\# \mathcal {C}_{j,n}] + (s-\rho ) E_{\alpha } [\# \mathcal {C}_{1,n}] = \rho n + (s-\rho ) E_{\alpha }[\# \mathcal {C}_{1,n}]. \end{aligned}$$

Turning to the mean square displacement, arguing similarly to Eq. (17) in [21],

$$\begin{aligned}&E_{p,q,s}[(H_n)^2]\nonumber \\&\quad = \sum _{j=1}^n E_{p,q,s} [(\xi _j)^2] \cdot E_{\alpha } [(\# \mathcal {C}_{j,n})^2] \nonumber \\&\qquad + 2\sum _{1\le j<k \le n} E_{p,q,s}[\xi _j] \cdot E_{p,q,s}[\xi _k] \cdot E_{\alpha } [(\# \mathcal {C}_{j,n}) \cdot (\# \mathcal {C}_{k,n})] \nonumber \\&\quad = \rho \sum _{j=1}^n E_{\alpha }[(\# \mathcal {C}_{j,n})^2] + 2\rho ^2 \sum _{1 \le j<k \le n} E_{\alpha }[(\# \mathcal {C}_{j,n}) \cdot (\# \mathcal {C}_{k,n})] \nonumber \\&\qquad +(s-\rho ) E_{\alpha }[(\# \mathcal {C}_{1,n})^2]+ 2(s - \rho )\rho \sum _{1<k \le n} E_{\alpha }[(\# \mathcal {C}_{1,n}) \cdot (\# \mathcal {C}_{k,n})]. \end{aligned}$$
(20)

The first two terms in (20) are

$$\begin{aligned}&\rho ^2 E_{\alpha }\left[ \left( \sum _{j=1}^n \# \mathcal {C}_{j,n}\right) ^2 \right] +(\rho -\rho ^2) \sum _{j=1}^n E_{\alpha }[(\# \mathcal {C}_{j,n})^2] \\&\quad =\rho ^2n^2 + \rho (1-\rho ) \sum _{j=1}^n E_{\alpha }[(\# \mathcal {C}_{j,n})^2]. \end{aligned}$$

Noting that

$$\begin{aligned} \sum _{1<k \le n} E_{\alpha }[(\# \mathcal {C}_{1,n}) \cdot (\# \mathcal {C}_{k,n})]&= E_{\alpha }\left[ (\# \mathcal {C}_{1,n})\cdot \sum _{1<k \le n} (\# \mathcal {C}_{k,n}) \right] \\&= E_{\alpha }[(\# \mathcal {C}_{1,n})\cdot (n-\# \mathcal {C}_{1,n} )], \end{aligned}$$

the other two terms in (20) are

$$\begin{aligned}&(s-\rho ) E_{\alpha }[(\# \mathcal {C}_{1,n})^2]+ 2(s-\rho )\rho E_{\alpha }[(\# \mathcal {C}_{1,n})\cdot (n-\# \mathcal {C}_{1,n} )] \\&\quad =(1-2\rho )(s-\rho ) E_{\alpha }[(\# \mathcal {C}_{1,n})^2]+ 2(s-\rho )\rho n E_{\alpha }[\# \mathcal {C}_{1,n}] . \end{aligned}$$

Using

$$\begin{aligned} (E_{p,q,s}[H_n])^2&= \rho ^2 n^2 +2(s-\rho ) \rho n E_{\alpha } [\# \mathcal {C}_{1,n} ]+ (s-\rho )^2 (E_{\alpha } [\# \mathcal {C}_{1,n} ])^2, \end{aligned}$$

we have the conclusion. \(\square \)

Combining (19) with Lemma 1, we can obtain the asymptotics of the variance.

Corollary 2

When \(q>0\),

$$\begin{aligned} V_{p,q,s} [H_n]&\sim {\left\{ \begin{array}{ll} \dfrac{\rho (1-\rho )}{1-2\alpha }n &{}(\alpha <1/2), \\ \rho (1-\rho ) n \log n &{}(\alpha =1/2), \\ \left[ \dfrac{\rho (1-\rho )}{(2\alpha -1)\Gamma (2\alpha )} + \dfrac{2(1-2\rho )(s-\rho )}{\Gamma (1+2\alpha )} - \dfrac{(s-\rho )^2}{\Gamma (1+\alpha )^2}\right] n^{2\alpha } &{}(\alpha >1/2) \\ \end{array}\right. } \end{aligned}$$

as \(n \rightarrow \infty \).
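
For instance, in the case \(\alpha >1/2\) the corollary can be read off as follows. Since \(\Gamma (n+c)/\Gamma (n) \sim n^c\) as \(n \rightarrow \infty \), Lemma 1 gives

$$\begin{aligned} \sum _{j=1}^n E_{\alpha } [(\# \mathcal {C}_{j,n})^2 ] \sim \dfrac{n^{2\alpha }}{(2\alpha -1)\Gamma (2\alpha )}, \quad E_{\alpha } [(\# \mathcal {C}_{1,n})^2 ] \sim \dfrac{2n^{2\alpha }}{\Gamma (1+2\alpha )}, \quad E_{\alpha } [\# \mathcal {C}_{1,n} ] \sim \dfrac{n^{\alpha }}{\Gamma (1+\alpha )}, \end{aligned}$$

so all three terms in (19) are of order \(n^{2\alpha }\), and collecting the constants yields the coefficient above. When \(\alpha <1/2\) only the first term in (19) contributes at order n, and when \(\alpha =1/2\) it is of order \(n\log n\) while the other two terms are O(n).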

To close this section, we give a remark on the phase transition of the biased elephant random walk \(\{S_n\}\) on \(\mathbb {Z}\), which evolves as follows:

  • With probability \(\alpha \), the walker repeats one of its previous steps, chosen uniformly at random.

  • With probability \(1-\alpha \), the walker behaves like a simple random walk, jumping to the right with probability \(\rho \) or to the left with probability \(1-\rho \). (The unbiased case \(\rho =1/2\) is the original elephant random walk explained in the Introduction, where \(p \ge 1/2\) and \(\alpha =2p-1\).)

This is obtained from the minimal random walk model as follows: Let

$$\begin{aligned} Y_i :=2X_i-1 \quad \text{ and } \quad S_n=\sum _{i=1}^n Y_i = 2H_n-n. \end{aligned}$$

Then \(P(Y_1=+1)=1- P(Y_1=-1)=s\), and by (1),

$$\begin{aligned} P(Y_{n+1}=+1 \mid \mathcal {F}_n)&= \alpha \cdot \dfrac{\#\{i=1,\ldots ,n : Y_i=+1\}}{n} + (1-\alpha ) \cdot \rho , \\ P(Y_{n+1}=-1 \mid \mathcal {F}_n)&= \alpha \cdot \dfrac{\#\{i=1,\ldots ,n : Y_i=-1\}}{n} + (1-\alpha ) \cdot (1-\rho ). \end{aligned}$$

By (3), we have

$$\begin{aligned} \lim _{n \rightarrow \infty } \dfrac{S_n}{n}= 2\rho -1\qquad \text{ a.s.. } \end{aligned}$$
(21)

Consider bond percolation on \(T_n\) with parameter \(\alpha \), and assign a ‘spin’ \(m_j:=2\xi _j-1\in \{+1,-1\}\) to each of the percolation clusters \(\mathcal {C}_{j,n}\), independently for different clusters. By (17), \(S_n\) has the same distribution as

$$\begin{aligned} \sum _{j=1}^n m_j \cdot (\# \mathcal {C}_{j,n}). \end{aligned}$$

The above procedure is essentially the same as the “Divide and Color” model introduced by Häggström [14]. When \(s=\rho =1/2\), the resulting model resembles the Ising model with zero external field, and increasing \(\alpha \) corresponds to lowering the temperature. The parameter \(\varepsilon :=2\rho -1\) plays a role similar to that of the external field in the Ising model. By (21), when \(\varepsilon \ne 0\), the asymptotic speed of the walker remains unchanged regardless of the value of \(\alpha \). On the other hand, when \(\varepsilon = 0\), the walker exhibits a phase transition from diffusive to superdiffusive behavior. This is reminiscent of the fact that the Ising model has a phase transition only when the external field is zero.

Appendix B: Martingale Limit Theorems

Theorem 5

(Hall and Heyde [15], Theorem 2.15) Suppose that \(\{M_n\}\) is a square-integrable martingale with mean 0. Let \(d_k=M_k-M_{k-1}\) for \(k=1,2,\ldots \), where \(M_0=0\). On the event

$$\begin{aligned} \left\{ \sum _{k=1}^{\infty } E[(d_k)^2 \mid \mathcal {F}_{k-1}] <+\infty \right\} , \end{aligned}$$

\(\{M_n\}\) converges a.s..

Theorem 6

(Heyde [17], Theorem 1 (b)) Suppose that \(\{M_n\}\) is a square-integrable martingale with mean 0. Let \(d_k=M_k-M_{k-1}\) for \(k=1,2,\ldots \), where \(M_0=0\). If

$$\begin{aligned} \displaystyle \sum _{k=1}^{\infty } E[(d_k)^2] < +\infty \end{aligned}$$

holds in addition, then we have the following: Let

$$\begin{aligned} \displaystyle W_n^2 :=\sum _{k=n}^{\infty } (d_k)^2 \quad \text{ and } \quad s_n^2 := \sum _{k=n}^{\infty } E[ (d_k)^2]. \end{aligned}$$
  (i)

    The limit \(M_{\infty }:=\sum _{k=1}^{\infty } d_k\) exists a.s., and \(M_n {\mathop {\rightarrow }\limits ^{L^2}} M_{\infty }\).

  (ii)

    Assume that

    (a)

      \(\displaystyle \dfrac{W_n^2}{s_n^2} \rightarrow \eta ^2\) as \(n \rightarrow \infty \) in probability, and

    (b)

      \(\displaystyle \lim _{n \rightarrow \infty } \dfrac{1}{s_n^2}E\left[ \sup _{k \ge n} (d_k)^2\right] =0\),

    where \(\eta ^2\) is some a.s. finite and non-zero random variable. Then we have

    $$\begin{aligned} \dfrac{M_{\infty } - M_n}{W_{n+1}}&= \dfrac{ \sum _{k=n+1}^{\infty } d_k}{W_{n+1}}{\mathop {\rightarrow }\limits ^{d}} Z,\quad \text{ and } \\ \dfrac{M_{\infty } - M_n}{s_{n+1}}&= \dfrac{ \sum _{k=n+1}^{\infty } d_k}{s_{n+1}}{\mathop {\rightarrow }\limits ^{d}} \widehat{\eta } \cdot Z, \end{aligned}$$

    where Z is distributed as N(0, 1), and \(\widehat{\eta }\) is independent of Z and distributed as \(\eta \).

  (iii)

    Assume that the following three conditions hold:

    (a’)

      \(\displaystyle \dfrac{W_n^2}{s_n^2} \rightarrow \eta ^2\) as \(n \rightarrow \infty \) a.s.,

    (c)

      \(\displaystyle \sum _{k=1}^{\infty } \dfrac{1}{s_k} E[ |d_k| : |d_k| > \varepsilon s_k] < +\infty \) for any \(\varepsilon > 0\), and

    (d)

      \(\displaystyle \sum _{k=1}^{\infty } \dfrac{1}{s_k^4} E[ (d_k)^4 : |d_k| \le \delta s_k] < +\infty \) for some \(\delta > 0\).

    Then \(\displaystyle \limsup _{n \rightarrow \infty } \pm \dfrac{M_{\infty } - M_n}{\widehat{\phi }(W_{n+1}^2)} =1\) a.s., where \(\widehat{\phi }(t):=\sqrt{2t\log |\log t|}\).

Remark 3

A sufficient condition for (b) in Theorem 6 is that

$$\begin{aligned} \displaystyle \lim _{n \rightarrow \infty }\dfrac{1}{s_n^2} \sum _{k=n}^{\infty } E[ (d_k)^2 : |d_k| > \varepsilon s_n] =0 \end{aligned}$$

for any \(\varepsilon > 0\). (See the proof of Corollary 1 in Heyde [17].)
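
As a toy illustration of Theorem 6 (ii) (ours, not taken from [15] or [17]): take \(d_k=\varepsilon _k/k\) with i.i.d. symmetric signs \(\varepsilon _k\). Then \(W_n^2=s_n^2\), so \(\eta ^2=1\) and conditions (a) and (b) hold, and \((M_{\infty }-M_n)/s_{n+1}\) should be approximately N(0, 1) for large n. A short Monte Carlo check in Python:

import math, random

def tail_sum(n, K, rng=random):
    """Approximates M_infty - M_n = sum_{k > n} eps_k / k by truncating the series at K."""
    return sum((1 if rng.random() < 0.5 else -1) / k for k in range(n + 1, K + 1))

n, K, trials = 50, 5000, 4000
s_np1 = math.sqrt(sum(1.0 / k ** 2 for k in range(n + 1, K + 1)))   # s_{n+1}, truncated at K
vals = [tail_sum(n, K) / s_np1 for _ in range(trials)]
mean = sum(vals) / trials
var = sum(v * v for v in vals) / trials - mean ** 2
print(mean, var)   # should be close to 0 and 1, consistent with a standard normal limit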

Appendix C: The Mittag–Leffler Distribution

The Mittag–Leffler function is defined by

$$\begin{aligned} E_{\alpha }(z) := \sum _{k=0}^{\infty } \dfrac{z^k}{\Gamma (\alpha k + 1)}\qquad (\alpha ,z \in \mathbb {C}). \end{aligned}$$

Note that \(E_1(z)=e^z\) (see e.g. [6], p. 315).

The random variable X is Mittag–Leffler distributed with parameter \(p \in [0,1]\) if

$$\begin{aligned} E[e^{\lambda X}] = E_p (\lambda ) = \sum _{k=0}^{\infty } \dfrac{\lambda ^k}{\Gamma (1+ kp)}\qquad \hbox { for}\ \lambda \in \mathbb {R}. \end{aligned}$$

Thus the k-th moment of X is \( \dfrac{k!}{\Gamma (1+ kp)}\), and this distribution is determined by its moments (see [6], p. 329 and p. 391). If \(p=0\) (resp. \(p=1\)), then X has the exponential distribution with mean one (resp. X is concentrated at 1). For \(p \in (0,1)\), the probability density function \(f_p(x)\) of X is

$$\begin{aligned} f_p(x)= \dfrac{\rho _p(x^{-1/p})}{px^{1+1/p}}\qquad \hbox { for}\ x>0, \end{aligned}$$

where \(\rho _p(x)\) is the density function of the one-sided stable(p) distribution with Laplace transform \(E[e^{-\lambda S}]=e^{-\lambda ^p}\). See [22] for details. In particular, \(f_{1/2}\) is the density function of \(\sqrt{2}\,|Z|\) with \(Z \sim N(0,1)\) (equivalently, of the half-normal distribution with scale parameter \(\sqrt{2}\)):

$$\begin{aligned} f_{1/2} (x) = \dfrac{1}{\sqrt{\pi }}\, e^{-x^2/4}\qquad \hbox { for}\ x>0. \end{aligned}$$
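
A quick numerical check of the above (ours; it assumes the moment normalization \(E[X^k]=k!/\Gamma (1+kp)\) stated above): the truncated series for \(E_{1/2}(\lambda )\) is compared with the classical closed form \(e^{\lambda ^2}(1+\mathrm {erf}(\lambda ))\), and the moments for \(p=1/2\) are compared with Monte Carlo moments of \(\sqrt{2}\,|Z|\), \(Z \sim N(0,1)\).

import math, random

def ml_series(p, lam, terms=200):
    """Truncated series for E_p(lambda) = sum_{k >= 0} lambda^k / Gamma(1 + k p)."""
    return sum(lam ** k / math.gamma(1 + k * p) for k in range(terms))

# E_{1/2}(lambda) has the closed form exp(lambda^2) * (1 + erf(lambda))
for lam in (0.5, 1.0, 2.0):
    print(lam, ml_series(0.5, lam), math.exp(lam ** 2) * (1 + math.erf(lam)))

# Moments k!/Gamma(1 + k/2) of the Mittag-Leffler(1/2) law vs. Monte Carlo moments of sqrt(2)|Z|
rng = random.Random(0)
samples = [math.sqrt(2) * abs(rng.gauss(0, 1)) for _ in range(200000)]
for k in (1, 2, 3):
    mc = sum(x ** k for x in samples) / len(samples)
    print(k, mc, math.factorial(k) / math.gamma(1 + k / 2))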

Cite this article

Miyazaki, T., Takei, M. Limit Theorems for the ‘Laziest’ Minimal Random Walk Model of Elephant Type. J Stat Phys 181, 587–602 (2020). https://doi.org/10.1007/s10955-020-02590-4
