Benchmarking ADMM in nonconvex NLPs
Computers & Chemical Engineering (IF 4.3) | Pub Date: 2018-08-27 | DOI: 10.1016/j.compchemeng.2018.08.036
Jose S. Rodriguez, Bethany Nicholson, Carl Laird, Victor M. Zavala

We study connections between the alternating direction method of multipliers (ADMM), the classical method of multipliers (MM), and progressive hedging (PH). These connections are used to derive benchmark metrics and strategies to monitor and accelerate convergence, and to help explain why ADMM and PH are capable of solving complex nonconvex NLPs. Specifically, we observe that ADMM is an inexact version of MM and approaches its performance when multiple coordination steps are performed. In addition, we use the observation that PH is a specialization of ADMM and borrow the Lyapunov function and primal-dual feasibility metrics used in ADMM to explain why PH can solve nonconvex NLPs. This analysis also highlights that specialized PH schemes can be derived to tackle a wider range of stochastic programs and even other problem classes. Our exposition is tutorial in nature and seeks to motivate algorithmic improvements and new decomposition strategies.
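The abstract's two key observations (ADMM as an inexact MM, and convergence monitored via primal-dual residuals) can be illustrated on a toy consensus problem. The sketch below is not the authors' benchmark code; it is a minimal, hypothetical ADMM loop for min (x-1)² + (z-3)² subject to x = z, where `inner_steps` controls how many coordination sweeps are run before each dual update (more sweeps push the iteration toward classical MM behavior), and `r`/`s` are the standard primal and dual residuals used as convergence metrics:

```python
def admm_consensus(rho=1.0, inner_steps=1, tol=1e-8, max_iter=500):
    """Scaled-form ADMM for: min (x-1)^2 + (z-3)^2  s.t.  x = z.

    Toy problem for illustration only. inner_steps > 1 repeats the
    x/z coordination sweeps before the dual update, mimicking the
    inexact-MM view of ADMM discussed in the paper.
    """
    x = z = u = 0.0  # u is the scaled dual variable
    r = s = float("inf")
    for _ in range(max_iter):
        z_old = z
        for _ in range(inner_steps):
            # Closed-form subproblem minimizers for the quadratics:
            x = (2.0 + rho * (z - u)) / (2.0 + rho)  # x-minimization
            z = (6.0 + rho * (x + u)) / (2.0 + rho)  # z-minimization
        u = u + (x - z)              # dual (multiplier) update
        r = abs(x - z)               # primal residual: constraint violation
        s = rho * abs(z - z_old)     # dual residual: z-iterate change
        if r < tol and s < tol:
            break
    return x, z, r, s

x, z, r, s = admm_consensus()
# Both variables converge to the consensus solution x = z = 2,
# and the residuals r, s shrink to the tolerance.
```

Tracking `r` and `s` jointly, rather than the objective alone, is exactly the kind of primal-dual feasibility monitoring the paper borrows from the ADMM literature to explain PH's behavior on nonconvex problems.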




Last updated: 2018-08-27