Estimation and inference in adaptive learning models with slowly decreasing gains
Journal of Time Series Analysis (IF 1.2), Pub Date: 2021-11-21, DOI: 10.1111/jtsa.12636
Alexander Mayer

An asymptotic theory for estimation and inference in adaptive learning models with strong mixing regressors and martingale difference innovations is developed. The maintained polynomial gain specification provides a unified framework that permits slow convergence of agents' beliefs and contains recursive least squares as a prominent special case. Reminiscent of the classical literature on co-integration, an asymptotic equivalence between two approaches to the estimation of long-run equilibrium and short-run dynamics is established. Notwithstanding potential threats to inference arising from non-standard convergence rates and a singular variance–covariance matrix, hypotheses involving single as well as joint restrictions remain testable. Monte Carlo evidence confirms the accuracy of the asymptotic theory in finite samples.
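To make the gain specification concrete, the following is a minimal sketch (not the paper's exact model) of adaptive learning of a scalar regression coefficient via stochastic approximation with a polynomial gain γ_t = c·t^(−α). Setting α = 1 yields a recursive-least-squares-type gain, while α ∈ (1/2, 1) corresponds to the "slowly decreasing" gains of the title; the function name `learn` and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def learn(alpha, c=1.0, beta=0.5, T=5000, b0=0.0):
    """Illustrative adaptive-learning recursion (not the paper's model).

    Agents learn beta in y_t = beta * x_t + e_t via the update
        b_t = b_{t-1} + gamma_t * x_t * (y_t - b_{t-1} * x_t),
    with polynomial gain gamma_t = c * t**(-alpha).
    """
    b = b0
    path = np.empty(T)
    for t in range(1, T + 1):
        x = rng.normal()             # regressor (i.i.d. here for simplicity)
        e = rng.normal(scale=0.1)    # martingale-difference innovation
        y = beta * x + e
        gain = c * t ** (-alpha)     # polynomial gain; alpha=1 ~ RLS-type
        b += gain * x * (y - b * x)  # belief update
        path[t - 1] = b
    return path

for alpha in (1.0, 0.7):
    final = learn(alpha)[-1]
    print(f"alpha={alpha}: final belief {final:.3f} (true beta = 0.5)")
```

With α = 0.7 the gain shrinks more slowly, so beliefs adapt faster early on at the cost of a slower (non-standard) convergence rate of the estimator, which is the setting the paper's inference theory addresses.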
