Fast Convergence Rates of Distributed Subgradient Methods With Adaptive Quantization
IEEE Transactions on Automatic Control (IF 6.2), Pub Date: 2020-08-04, DOI: 10.1109/tac.2020.3014095
Thinh T. Doan, Siva Theja Maguluri, Justin Romberg

We study distributed optimization problems over a network when the communication between the nodes is constrained, and therefore, information that is exchanged between the nodes must be quantized. Recent advances using the distributed gradient algorithm with a quantization scheme at a fixed resolution have established convergence, but at rates significantly slower than when the communications are unquantized. In this article, we introduce a novel quantization method, which we refer to as adaptive quantization, that allows us to match the convergence rates under perfect communications. Our approach adjusts the quantization scheme used by each node as the algorithm progresses: as we approach the solution, we become more certain about where the state variables are localized and adapt the quantizer codebook accordingly. We bound the convergence rates of the proposed method as a function of the communication bandwidth, the underlying network topology, and structural properties of the constituent objective functions. In particular, we show that if the objective functions are convex or strongly convex, then using adaptive quantization does not affect the rate of convergence of the distributed subgradient methods when the communications are quantized, except for a constant that depends on the resolution of the quantizer. To the best of our knowledge, the rates achieved in this article are better than any existing work in the literature for distributed gradient methods under finite communication bandwidths. We also provide numerical simulations that compare convergence properties of the distributed gradient methods with and without quantization for solving distributed regression problems for both quadratic and absolute loss functions.
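To make the adaptive-quantization idea above concrete, here is a minimal Python sketch of a distributed subgradient iteration in which each node broadcasts a uniformly quantized copy of its state; the codebook is centered at the node's previously transmitted value and its radius shrinks geometrically as the iterates localize. The codebook form, the geometric decay rule, the step-size constants, and all names (quantize, distributed_subgradient_aq, decay, etc.) are illustrative assumptions for this sketch, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(x, center, radius, bits):
    """Uniform quantizer with 2**bits levels on [center - radius, center + radius];
    inputs outside the range are clipped. The codebook form is an assumption."""
    step = 2.0 * radius / (2 ** bits - 1)
    lo = center - radius
    return lo + np.round(np.clip(x - lo, 0.0, 2.0 * radius) / step) * step

def distributed_subgradient_aq(subgrads, W, x0, bits=4, radius0=10.0,
                               decay=0.99, iters=500):
    """Distributed subgradient method with adaptively shrinking quantizer ranges.
    Each node broadcasts quantize(x[i]) using a codebook centered at its last
    transmitted value, so neighbors can decode it; the radius contracts as the
    iterates localize. A sketch of the idea, not the paper's exact algorithm."""
    x, c, radius = x0.copy(), np.zeros_like(x0), radius0
    for k in range(iters):
        alpha = 0.02 / np.sqrt(k + 1)     # diminishing step size (convex case)
        q = quantize(x, c, radius, bits)  # quantized states sent to neighbors
        c = q                             # senders and receivers track the same centers
        g = np.stack([gi(x[i]) for i, gi in enumerate(subgrads)])
        x = W @ q - alpha * g             # consensus on quantized states + subgradient step
        radius *= decay                   # tighten the codebook each round
    return x

# Toy distributed regression with quadratic loss: node i holds (A_i, b_i) and
# f_i(x) = 0.5 * ||A_i x - b_i||^2, mirroring the simulations described above.
n, d = 5, 3
A = [rng.standard_normal((10, d)) for _ in range(n)]
b = [Ai @ np.ones(d) + 0.1 * rng.standard_normal(10) for Ai in A]
subgrads = [lambda x, Ai=Ai, bi=bi: Ai.T @ (Ai @ x - bi) for Ai, bi in zip(A, b)]
W = np.full((n, n), 1.0 / n)              # complete graph, uniform mixing weights
print(distributed_subgradient_aq(subgrads, W, np.zeros((n, d))))
```

Centering each codebook at the previously transmitted value means receivers can reconstruct the codebook without extra communication; only the quantization indices (bits per coordinate) need to be sent each round, which is what makes the finite-bandwidth constraint workable in this sketch.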

Updated: 2024-08-22