Network Optimization via Smooth Exact Penalty Functions Enabled by Distributed Gradient Computation
arXiv - CS - Multiagent Systems. Pub Date: 2020-11-08. DOI: arxiv-2011.04100. Priyank Srivastava and Jorge Cortes
This paper proposes a distributed algorithm for a network of agents to solve
an optimization problem with separable objective function and locally coupled
constraints. Our strategy is based on reformulating the original constrained
problem as the unconstrained optimization of a smooth (continuously
differentiable) exact penalty function. Computing the gradient of this penalty
function in a distributed way is challenging even under the separability
assumptions on the original optimization problem. Our technical approach shows
that the distributed computation problem for the gradient can be formulated as
a system of linear algebraic equations defined by separable problem data. To
solve it, we design an exponentially fast, input-to-state stable distributed
algorithm that does not require the individual agent matrices to be invertible.
We employ this strategy to compute the gradient of the penalty function at the
current network state. Our distributed solver for the original constrained
optimization problem interconnects this gradient estimation with a descent
dynamics in which the agents follow the resulting direction. Numerical
simulations illustrate the convergence and robustness properties of the
proposed algorithm.
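The abstract's key technical step is recasting the gradient computation as a system of linear algebraic equations with separable (per-agent) data, solved by a distributed algorithm that does not require individual agent matrices to be invertible. As a rough illustration of this flavor of problem, the sketch below implements a standard projection-consensus iteration for solving Ax = b when each agent only knows its own row block. This is not the paper's algorithm (which is a continuous-time, input-to-state stable design); it is a minimal discrete-time stand-in assuming numpy, a complete communication graph, and a consistent square system.

```python
import numpy as np

def null_projector(A_i):
    """Orthogonal projector onto the null space of A_i: I - pinv(A_i) @ A_i."""
    return np.eye(A_i.shape[1]) - np.linalg.pinv(A_i) @ A_i

def distributed_linear_solve(rows, rhs, neighbors, iters=500):
    """Projection-consensus sketch for A x = b.

    Agent i only knows its own row block (rows[i], rhs[i]).  Each agent
    keeps a local estimate that always satisfies its own equation, and
    at every step moves toward the average of its neighbors' estimates
    without leaving its own solution set.
    """
    # Initialize each local estimate on the agent's own solution set:
    # A_i @ x_i(0) = b_i holds by construction.
    x = [np.linalg.pinv(A_i) @ b_i for A_i, b_i in zip(rows, rhs)]
    P = [null_projector(A_i) for A_i in rows]
    for _ in range(iters):
        x_new = []
        for i, nbrs in enumerate(neighbors):
            avg = np.mean([x[j] for j in nbrs], axis=0)
            # A_i @ P_i = 0, so this update preserves A_i @ x_i = b_i.
            x_new.append(x[i] + P[i] @ (avg - x[i]))
        x = x_new
    return x

# Toy consistent system: agent i holds row i of A and entry i of b.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 5.0]])
b = np.array([1.0, 2.0, 3.0])
rows = [A[i:i + 1] for i in range(3)]
rhs = [b[i:i + 1] for i in range(3)]
# Complete communication graph, self-loops included.
neighbors = [[0, 1, 2]] * 3

estimates = distributed_linear_solve(rows, rhs, neighbors)
print(max(np.linalg.norm(x_i - np.linalg.solve(A, b)) for x_i in estimates))
```

On a connected graph with a consistent system, all local estimates converge to a common solution; the paper's contribution lies in embedding such a distributed linear-equation solver (with exponential, input-to-state stable convergence and no invertibility requirement on agent matrices) into the gradient computation of the smooth exact penalty function.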
Updated: 2020-11-10