Robust Distributed Optimization With Randomly Corrupted Gradients
IEEE Transactions on Signal Processing ( IF 5.4 ) Pub Date : 2022-06-29 , DOI: 10.1109/tsp.2022.3185885
Berkay Turan, Cesar A. Uribe, Hoi-To Wai, Mahnoosh Alizadeh

In this paper, we propose a first-order distributed optimization algorithm that is provably robust to Byzantine failures, i.e., arbitrary and potentially adversarial behavior, in a setting where all participating agents are prone to failure. We model each agent's state over time as a two-state Markov chain that indicates Byzantine or trustworthy behavior at different time instants. We impose no restriction on the maximum number of Byzantine agents at any given time. We design our method based on three layers of defense: 1) temporal robust aggregation, 2) spatial robust aggregation, and 3) gradient normalization. We study two settings for stochastic optimization, namely Sample Average Approximation and Stochastic Approximation, and provide convergence guarantees of our method for both strongly convex and smooth non-convex cost functions.
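The three defense layers described above can be illustrated with a minimal sketch. The specific aggregation rules below (coordinate-wise median over a time window for the temporal layer, coordinate-wise median across agents for the spatial layer, and Euclidean-norm normalization) are assumptions chosen for illustration; the paper's actual aggregation operators may differ, and all function names here are hypothetical.

```python
import numpy as np

def temporal_aggregate(history):
    # Temporal layer (assumed rule): coordinate-wise median over one
    # agent's recent gradient window, damping transiently corrupted updates.
    # history: (window, dim) array of that agent's past gradients.
    return np.median(history, axis=0)

def spatial_aggregate(agent_grads):
    # Spatial layer (assumed rule): coordinate-wise median across agents,
    # damping updates from agents that are currently Byzantine.
    # agent_grads: (n_agents, dim) array.
    return np.median(agent_grads, axis=0)

def normalize(g, eps=1e-12):
    # Gradient normalization: bounds the magnitude of the applied update,
    # so no single corrupted direction can dominate a step.
    return g / (np.linalg.norm(g) + eps)

def robust_step(x, per_agent_histories, lr=0.1):
    # One iterate update combining the three layers:
    # temporal aggregation per agent, then spatial aggregation,
    # then normalization of the resulting direction.
    temporal = np.stack([temporal_aggregate(h) for h in per_agent_histories])
    direction = normalize(spatial_aggregate(temporal))
    return x - lr * direction
```

As a usage example, an agent that reports one wildly corrupted gradient (e.g., `[100, 100]` among honest `[1, 1]` reports) is filtered by the temporal median before the spatial layer ever sees it; the normalization then caps the step length regardless of the surviving gradient's scale.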
