Scalable estimation strategies based on stochastic approximations: Classical results and new insights.
Statistics and Computing (IF 1.6). Pub Date: 2015-06-11. DOI: 10.1007/s11222-015-9560-y
Edoardo M. Airoldi, Panos Toulis

Estimation with large amounts of data can be facilitated by stochastic gradient methods, in which model parameters are updated sequentially using small batches of data at each step. Here, we review early work and modern results that illustrate the statistical properties of these methods, including convergence rates, stability, and asymptotic bias and variance. We then survey modern applications where these methods are useful, ranging from an online version of the EM algorithm to deep learning. In light of these results, we argue that stochastic gradient methods are poised to become principled benchmark estimation procedures for large datasets, especially those in the family of stable proximal methods, such as implicit stochastic gradient descent.
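To make the contrast drawn in the abstract concrete, the sketch below compares a plain (explicit) stochastic gradient update with the implicit variant for least-squares regression. This is a minimal illustration under assumed conventions, not the authors' code: the function names, the decaying learning-rate schedule, and the least-squares setting are all illustrative choices. For squared loss the implicit update, in which the new parameter value appears on both sides of the update equation, admits a closed form that shrinks the residual by 1 / (1 + gamma * ||x||^2), which is what makes the method stable for large learning rates.

import numpy as np

def explicit_sgd(X, y, lr0=0.5):
    """One pass of explicit SGD for least-squares regression."""
    n, p = X.shape
    theta = np.zeros(p)
    for i in range(n):
        gamma = lr0 / (1 + i)          # decaying learning rate
        resid = y[i] - X[i] @ theta
        theta += gamma * resid * X[i]  # plain (explicit) update
    return theta

def implicit_sgd(X, y, lr0=0.5):
    """One pass of implicit SGD: the update is solved for the new theta,
    which for squared loss reduces to shrinking the residual by
    1 / (1 + gamma * ||x||^2)."""
    n, p = X.shape
    theta = np.zeros(p)
    for i in range(n):
        gamma = lr0 / (1 + i)
        resid = y[i] - X[i] @ theta
        theta += gamma * resid * X[i] / (1 + gamma * (X[i] @ X[i]))
    return theta

# Illustrative usage on simulated data (hypothetical example, not from the paper):
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
theta_true = np.arange(1.0, 6.0)
y = X @ theta_true + rng.normal(size=1000)
print(explicit_sgd(X, y))  # both should approach theta_true,
print(implicit_sgd(X, y))  # with the implicit update more robust to lr0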

Last updated: 2015-06-11