How linear reinforcement affects Donsker’s theorem for empirical processes
Probability Theory and Related Fields (IF 1.5) Pub Date: 2020-09-18, DOI: 10.1007/s00440-020-01001-9
Jean Bertoin

A reinforcement algorithm introduced by H. A. Simon produces a sequence of uniform random variables with memory as follows. At each step, with a fixed probability $p\in(0,1)$, $\hat U_{n+1}$ is sampled uniformly from $\hat U_1, \ldots, \hat U_n$, and with complementary probability $1-p$, $\hat U_{n+1}$ is a new independent uniform variable. The Glivenko–Cantelli theorem remains valid for the reinforced empirical measure, but the Donsker theorem does not. Specifically, we show that the sequence of empirical processes converges in law to a Brownian bridge only up to a constant factor when $p<1/2$, and that a further rescaling is needed when $p>1/2$, in which case the limit is a bridge with exchangeable increments and discontinuous paths. This is related to earlier limit theorems for correlated Bernoulli processes, the so-called elephant random walk, and more generally step-reinforced random walks.
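For readers who want to experiment, the following is a minimal simulation sketch of Simon's scheme and of the classically rescaled empirical process (plain Python with NumPy; the function names and parameter choices are ours for illustration, not from the paper):

    import numpy as np

    def simon_reinforced_uniforms(n, p, seed=None):
        # Simon's reinforcement scheme: with probability p, repeat a
        # uniformly chosen earlier value; otherwise draw a fresh Uniform(0,1).
        rng = np.random.default_rng(seed)
        u = np.empty(n)
        u[0] = rng.random()                 # the first variable is a fresh uniform
        for i in range(1, n):
            if rng.random() < p:
                u[i] = u[rng.integers(i)]   # copy \hat U_J with J uniform on past indices
            else:
                u[i] = rng.random()         # new independent uniform
        return u

    def empirical_process(u, t):
        # sqrt(n) * (empirical CDF of u evaluated at t - t): the Donsker scaling.
        n = len(u)
        return np.sqrt(n) * ((u[:, None] <= t[None, :]).mean(axis=0) - t)

    t = np.linspace(0.0, 1.0, 201)
    G = empirical_process(simon_reinforced_uniforms(10_000, p=0.25, seed=1), t)
    # For p < 1/2, sample paths of G resemble a Brownian bridge up to a
    # constant factor; for p > 1/2, the sqrt(n) scaling is no longer correct
    # and the (properly rescaled) limit has discontinuous paths.
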

Updated: 2020-09-18