Fast and stable nonconvex constrained distributed optimization: the ELLADA algorithm
Optimization and Engineering (IF 2.1). Pub Date: 2021-01-03. DOI: 10.1007/s11081-020-09585-w
Wentao Tang , Prodromos Daoutidis

Distributed optimization, in which multiple computing agents solve subproblems locally and coordinate their solutions, is a promising approach for large-scale optimization problems, e.g., those arising in model predictive control (MPC) of large-scale plants. However, a distributed optimization algorithm that is simultaneously computationally efficient, globally convergent, and amenable to nonconvex constraints remains an open problem. In this paper, we combine three important modifications to the classical alternating direction method of multipliers (ADMM) for distributed optimization. Specifically, (1) an extra-layer architecture is adopted to accommodate nonconvexity and handle inequality constraints, (2) equality-constrained nonlinear programming (NLP) subproblems are allowed to be solved approximately, and (3) a modified Anderson acceleration is employed to reduce the number of iterations. Theoretical convergence of the proposed algorithm, named ELLADA, is established, and its numerical performance is demonstrated on a large-scale NLP benchmark problem. Its application to distributed nonlinear MPC is also described and illustrated on a benchmark process system.
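For orientation, the classical ADMM that ELLADA builds on alternates a primal update, a splitting-variable update, and a dual ascent step. The following is a minimal sketch of that base iteration on a simple lasso problem (min ½‖Ax − b‖² + λ‖z‖₁ s.t. x = z); the problem instance, variable names, and fixed penalty ρ are illustrative choices, not the paper's formulation, and ELLADA's modifications (the extra-layer architecture, approximate subproblem solves, and Anderson acceleration) are layered on top of this scheme.

```python
import numpy as np

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=100):
    """Classical (unmodified) ADMM for min 0.5*||Ax-b||^2 + lam*||z||_1  s.t. x = z."""
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)  # scaled dual variable
    AtA, Atb = A.T @ A, A.T @ b
    for _ in range(iters):
        # x-update: equality-coupled quadratic subproblem
        x = np.linalg.solve(AtA + rho * np.eye(n), Atb + rho * (z - u))
        # z-update: soft-thresholding, the prox operator of the l1 norm
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
        # dual update: gradient ascent on the consensus constraint x = z
        u = u + x - z
    return z
```

In ELLADA's setting the x-update becomes an equality-constrained NLP that may be solved only approximately, which is where the convergence analysis and the acceleration scheme of the paper come in.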




Updated: 2021-01-03