A Hessian Inversion-Free Exact Second Order Method for Distributed Consensus Optimization
IEEE Transactions on Signal and Information Processing over Networks (IF 3.2), Pub Date: 2022-09-02, DOI: 10.1109/tsipn.2022.3203860
Dusan Jakovetic, Natasa Krejic, Natasa Krklec Jerinkic

We consider a standard distributed consensus optimization problem in which a set of agents connected over an undirected network minimizes the sum of their individual (local) strongly convex costs. The Alternating Direction Method of Multipliers (ADMM) and the Proximal Method of Multipliers (PMM) have proved to be effective frameworks for the design of exact distributed second order methods (involving calculation of local cost Hessians). However, existing methods involve explicit calculation of local Hessian inverses at each iteration, which may be very costly when the dimension of the optimization variable is large. In this article, we develop a novel method, termed Inexact Newton method for Distributed Optimization (INDO), that alleviates the need for Hessian inverse calculation. INDO follows the PMM framework but, unlike existing work, approximates the Newton direction through a generic fixed point method (e.g., Jacobi Overrelaxation) that does not involve Hessian inverses. We prove exact global linear convergence of INDO and provide an analytical study of how the degree of inexactness in the Newton direction calculation affects the overall method's convergence factor. Numerical experiments on several real data sets demonstrate that INDO's speed is on par with (or better than) state-of-the-art methods iteration-wise, hence incurring a comparable communication cost. At the same time, for sufficiently large optimization problem dimensions $n$ (even for $n$ on the order of a couple of hundred), INDO achieves savings in computational cost of at least an order of magnitude.
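To make the fixed-point idea concrete, the following is a minimal NumPy sketch (not the authors' INDO implementation) of how a Jacobi Overrelaxation (JOR) iteration can approximate the Newton direction $d$ solving $H d = -g$ using only the diagonal of $H$, with no explicit Hessian inverse. The function name, the relaxation parameter `omega`, the tolerance, and the synthetic SPD Hessian in the usage example are all illustrative assumptions.

```python
import numpy as np

def jor_newton_direction(H, g, omega=0.5, tol=1e-8, max_iters=500):
    """Approximate the Newton direction d solving H d = -g via
    Jacobi Overrelaxation (JOR), avoiding explicit inversion of H."""
    b = -g
    D = np.diag(H)               # diagonal entries of H (positive when H is SPD)
    R = H - np.diagflat(D)       # off-diagonal remainder of H
    d = np.zeros_like(g)
    for _ in range(max_iters):
        # JOR sweep: one matrix-vector product plus an elementwise diagonal solve
        d_next = (1.0 - omega) * d + omega * (b - R @ d) / D
        if np.linalg.norm(d_next - d) <= tol:
            return d_next
        d = d_next
    return d

# Usage sketch on a synthetic strongly convex quadratic (illustrative only)
rng = np.random.default_rng(0)
n = 300
A = rng.standard_normal((n, n))
H = A.T @ A / n + np.eye(n)      # SPD stand-in for a local cost Hessian
g = rng.standard_normal(n)
d = jor_newton_direction(H, g)
print(np.linalg.norm(H @ d + g))  # small residual => usable inexact Newton direction
```

Each JOR sweep costs one matrix-vector product, $O(n^2)$, whereas explicitly inverting the Hessian costs $O(n^3)$; truncating the sweeps trades direction accuracy for computation, which is consistent with the savings the abstract reports for large $n$.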

Updated: 2022-09-02