Optimization and Learning With Information Streams: Time-varying algorithms and applications
IEEE Signal Processing Magazine (IF 14.9), Pub Date: 2020-05-01, DOI: 10.1109/msp.2020.2968813
Emiliano Dall'Anese , Andrea Simonetto , Stephen Becker , Liam Madden

There is a growing cross-disciplinary effort in the broad domain of optimization and learning with streams of data, applied to settings where traditional batch optimization techniques cannot produce solutions at time scales that match the interarrival times of the data points due to computational and/or communication bottlenecks. Special types of online algorithms can handle this situation, and this article focuses on such time-varying optimization algorithms, with emphasis on machine learning (ML) and signal processing (SP) as well as data-driven control (DDC). Approaches for the design of time-varying or online first-order optimization methods are discussed, with emphasis on algorithms that can handle errors in the gradient, as may arise when the gradient is estimated. Insights into performance metrics and accompanying claims are provided, along with evidence of cases where algorithms that are provably convergent in batch optimization may perform poorly in an online regime. The role of distributed computation is discussed. Illustrative numerical examples for a number of applications of broad interest are provided to convey key ideas.
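To make the setting concrete, below is a minimal illustrative sketch (not taken from the article) of the kind of time-varying first-order method the abstract describes: at each data arrival the cost changes, and the algorithm can afford only a single inexact gradient step before the next cost appears. The quadratic drifting cost, the noise model, and all parameter values are hypothetical assumptions chosen for illustration.

# Minimal sketch of an online / time-varying gradient method with an inexact gradient.
# At each step k a new cost f_k arrives; the algorithm takes one noisy gradient step
# per arrival and tracks the drifting optimizer theta_k. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
d = 5          # decision-variable dimension (assumed)
alpha = 0.1    # step size (assumed)
sigma = 0.01   # std. dev. of the gradient error, i.e., the inexact gradient (assumed)
T = 200        # number of time steps / data arrivals

x = np.zeros(d)                                   # running iterate
for k in range(T):
    theta_k = np.sin(0.05 * k) * np.ones(d)       # drifting optimizer of f_k
    grad = x - theta_k                            # grad of f_k(x) = 0.5*||x - theta_k||^2
    noisy_grad = grad + sigma * rng.standard_normal(d)   # gradient estimated with error
    x = x - alpha * noisy_grad                    # single first-order update per arrival
    if k % 50 == 0:
        # Tracking error with respect to the current optimizer, a common metric
        # for time-varying optimization.
        print(f"k={k:3d}  tracking error = {np.linalg.norm(x - theta_k):.4f}")

With a fixed step size, such a scheme does not converge to a single point but tracks the moving optimizer up to an error floor determined by the drift of the cost and the gradient noise, which is the flavor of performance guarantee discussed in the article.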

Updated: 2020-05-01