
Automatica

Volume 131, September 2021, 109737
Brief paper
Distributed consensus-based solver for semi-definite programming: An optimization viewpoint

https://doi.org/10.1016/j.automatica.2021.109737

Abstract

This paper aims at the distributed computation of semi-definite programming (SDP) problems over multi-agent networks. Two SDP problems, a non-sparse case and a sparse case, are transformed into distributed optimization problems by fully exploiting their structures and introducing consensus constraints. Inspired by primal–dual and consensus methods, we propose two distributed algorithms, one for each case, with the help of projection and derivative feedback techniques. Furthermore, we prove that the algorithms converge to their optimal solutions, and we evaluate their convergence rates via the duality gap.

Introduction

Distributed optimization has attracted intensive attention in the past decade, and its ideas have been applied to solving linear algebraic equations (Liu et al., 2019, Mou et al., 2015, Yuan et al., 2021) and matrix equations (Zeng et al., 2019) over multi-agent networks.

One of the most important problems related to matrix computation is semi-definite programming (SDP), which arises in a variety of applications. For instance, matrix rank minimization (Fazel, 2002), sensor network localization (Simonetto & Leus, 2014), and optimal power flow (Dall’Anese et al., 2013) can all be cast as SDP. Due to its great importance, centralized algorithms such as the ellipsoid algorithm (Shor, 2012) and the interior point method (Vandenberghe & Boyd, 1996) have been developed. More recently, exploring distributed algorithms for SDP has received much research interest. Most existing results concern SDP with sparsity, and their approaches generally consist of two steps: transform the problem into a coupled one by exploiting its structure (Fukuda et al., 2001), and then design distributed algorithms, including first-order splitting methods (Dall’Anese et al., 2013, Simonetto and Leus, 2014, Sun et al., 2014, Zheng et al., 2020) and primal–dual interior point methods (Pakazad et al., 2014, Pakazad et al., 2017). In Sun et al. (2014), a first-order splitting algorithm was proposed for SDP with chordal sparsity. In Dall’Anese et al. (2013), a distributed alternating direction method of multipliers (ADMM) was explored and applied to state estimation for smart grids. In Simonetto and Leus (2014), a distributed ADMM with great scalability was designed after an edge-based decomposition. In Zheng et al. (2020), an ADMM-based algorithm was designed for both primal and dual SDP problems while also identifying infeasible problems. To reduce the computational complexity and improve the convergence rate, distributed primal–dual interior point methods were developed. In Pakazad et al. (2014), a proximal splitting method was designed for the primal–dual directions, but it required many iterations to obtain sufficiently accurate results. In Pakazad et al. (2017), a message passing algorithm was adopted to compute search directions and stepsizes with a fast convergence rate, but an inherent tree structure was required and agents had to update sequentially along the tree. Therefore, it is worthwhile to explore effective distributed algorithms for sparse SDP. In addition, non-sparse SDP problems are also important and have broad applications (Vandenberghe & Boyd, 1996).

In this paper, we investigate the distributed computation for both non-sparse and sparse SDP problems with the help of consensus-based methods. Main contributions are summarized as follows: (i) By fully exploiting their structures and introducing consensus constraints, we transform the two problems into distributed optimization problems. (ii) We propose two distributed primal–dual algorithms with projection operators and derivative feedbacks for positive semi-definite constraints and linear objective functions. (iii) We provide convergence and convergence rate analysis for the projected dynamics by Lyapunov approaches.

Notation: Let R^{m×n} be the set of m-by-n real matrices, 0_{m×n} be the m-by-n matrix with all entries 0, and I_n be the n-by-n identity matrix. Denote S^n and S^n_{++} (S^n_+) as the set of n-by-n symmetric matrices and positive (semi-)definite matrices, respectively. Write X ⪰ (≻) 0 if X ∈ S^n_+ (S^n_{++}). Let X_{ij} be the (i,j)-th entry of X, X^T the transpose of X, and ‖X‖_F the Frobenius norm of X. Define the Frobenius inner product (denoted by ⟨X, Y⟩_F) as Σ_{i,j} X_{ij} Y_{ij}. For X_i ∈ R^{m×p}, i ∈ {1,…,n}, denote col{X_1,…,X_n} ∈ R^{mn×p} as the matrix obtained by stacking the X_i in a column. Let J ⊆ {1,…,p} be a set of positive integers, and |J| the number of elements in J. Define E_J ∈ R^{|J|×p} as the 0-1 matrix obtained from I_p by keeping the rows indexed by J. For X ∈ S^p, E_J X E_J^T ∈ S^{|J|} contains the rows and columns of X indexed by J.
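The selection matrix E_J and the Frobenius inner product can be illustrated concretely. A minimal NumPy sketch; the matrices and the index set J below are hypothetical and chosen only to exercise the definitions:

```python
import numpy as np

# Hypothetical 4x4 symmetric matrix, used only for illustration.
X = np.arange(16, dtype=float).reshape(4, 4)
X = (X + X.T) / 2
Y = np.eye(4)

# Frobenius inner product <X, Y>_F = sum_{i,j} X_ij * Y_ij,
# which equals trace(X^T Y) for real matrices.
frob = np.sum(X * Y)
assert np.isclose(frob, np.trace(X.T @ Y))

# E_J: 0-1 matrix obtained from I_p by keeping the rows indexed by J.
# The paper's J = {1, 3} (1-based) corresponds to 0-based indices [0, 2].
p = 4
J = [0, 2]
E_J = np.eye(p)[J, :]          # shape |J| x p

# E_J X E_J^T is the principal submatrix of X with rows/columns in J.
sub = E_J @ X @ E_J.T
assert np.allclose(sub, X[np.ix_(J, J)])
```

The assertions confirm that left- and right-multiplying by E_J simply extracts the principal submatrix indexed by J.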


Preliminary and formulation

In this section, we introduce some preliminary knowledge, and formulate the SDP problems.

A set Ω ⊆ R^{m×n} is convex if θX_1 + (1−θ)X_2 ∈ Ω for any X_1, X_2 ∈ Ω and θ ∈ [0,1]. A function f: Ω → R is convex if Ω is a convex set and f(θX_1 + (1−θ)X_2) ≤ θf(X_1) + (1−θ)f(X_2) for any θ ∈ [0,1] and X_1, X_2 ∈ Ω. Moreover, f(·) is strictly convex if the inequality is strict whenever X_1 ≠ X_2 and θ ∈ (0,1).

Let Ω be a subset of R^{m×n}. For X ∈ R^{m×n}, the projection operator P_Ω(X) is defined by P_Ω(X) = argmin_{Y∈Ω} ‖X − Y‖_F. The distance of X to Ω is
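For the positive semi-definite cone Ω = S^n_+, the Frobenius-norm projection has a well-known closed form: eigendecompose the (symmetrized) matrix and clip negative eigenvalues to zero. The sketch below illustrates this standard operator; it is not necessarily the exact implementation used in the paper's projected dynamics:

```python
import numpy as np

def proj_psd(X):
    """Frobenius-norm projection of a square matrix onto S_+^n:
    symmetrize, eigendecompose, and clip negative eigenvalues."""
    w, V = np.linalg.eigh((X + X.T) / 2)
    return V @ np.diag(np.maximum(w, 0.0)) @ V.T

# An indefinite symmetric test matrix (hypothetical).
X = np.array([[1.0, 2.0],
              [2.0, -3.0]])
P = proj_psd(X)

# The projection is symmetric and positive semi-definite,
# and projecting again changes nothing (idempotence).
assert np.allclose(P, P.T)
assert np.min(np.linalg.eigvalsh(P)) >= -1e-10
assert np.allclose(proj_psd(P), P)
```

Idempotence is the defining property of a projection onto a closed convex set, which is why the check `proj_psd(P) == P` holds for the already-projected matrix.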

Non-sparse SDP

In the non-sparse case, we transform (2) into a distributed optimization problem and design a distributed algorithm. The multi-agent network, described by a graph G, consists of m agents. The following assumption is standard.

Assumption 2

Graph G is undirected and connected.

Under Assumption 2, by replacing the variable Z in (2) with Z_i for i ∈ {1,…,m}, we formulate a new problem, where a_ij is the (i,j)-th entry of the adjacency matrix of G. Under Assumption 2, Σ_{j=1}^m a_ij (Z_i − Z_j) = 0 implies Z_i = Z_j (Zeng et al., 2019).
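The implication that the consensus constraint Σ_j a_ij (Z_i − Z_j) = 0 forces Z_1 = … = Z_m rests on the graph Laplacian of a connected graph having rank m − 1, so its null space is spanned by the all-ones vector. A small NumPy check of this fact on a hypothetical path graph:

```python
import numpy as np

# A connected undirected graph on m = 4 agents (a path graph),
# chosen only for illustration.
m = 4
A = np.zeros((m, m))
for i in range(m - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0

# Graph Laplacian L = D - A, where D is the degree matrix.
L = np.diag(A.sum(axis=1)) - A

# For a connected graph, rank(L) = m - 1, and L z = 0 only for
# z in span{1}; with matrix variables, sum_j a_ij (Z_i - Z_j) = 0
# for all i likewise forces Z_1 = ... = Z_m.
assert np.linalg.matrix_rank(L) == m - 1
assert np.allclose(L @ np.ones(m), 0.0)
```

This rank property is exactly what Assumption 2 buys: connectivity guarantees that the distributed consensus constraints reproduce the single-variable problem (2).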

Sparse SDP

In the sparse SDP case, as discussed in Pakazad et al. (2017), we consider F_i = E_{J_i}^T A_i E_{J_i} and F_0 = Σ_{i=1}^m E_{J_i}^T M_i E_{J_i}, where A_i, M_i ∈ S^{|J_i|}, J_i ⊆ {1,…,p}, and |J_i| and E_{J_i} ∈ R^{|J_i|×p} were defined in Section 1. Define Z_{J_i} = E_{J_i} Z E_{J_i}^T. Then the objective function and the linear equality constraints of (2) are completely determined by the Z_{J_i} (partial entries of Z). The remaining entries only affect whether Z is positive semi-definite. In the following, we focus on positive semi-definite matrix decomposition.
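That the objective depends on Z only through the partial entries Z_{J_i} follows from the identity ⟨E_{J_i}^T A_i E_{J_i}, Z⟩_F = ⟨A_i, E_{J_i} Z E_{J_i}^T⟩_F (a cyclic-trace argument). A numerical check with hypothetical random data:

```python
import numpy as np

p = 5
J_i = [1, 3]                    # hypothetical index set for agent i
E = np.eye(p)[J_i, :]           # E_{J_i}, shape |J_i| x p

rng = np.random.default_rng(0)
A_i = rng.standard_normal((2, 2)); A_i = (A_i + A_i.T) / 2
Z = rng.standard_normal((p, p)); Z = (Z + Z.T) / 2

F_i = E.T @ A_i @ E             # lifted data matrix in S^p
Z_Ji = E @ Z @ E.T              # partial entries of Z seen by agent i

# <F_i, Z>_F = trace(E^T A_i E Z) = trace(A_i E Z E^T) = <A_i, Z_Ji>_F,
# so the objective term depends on Z only through Z_{J_i}.
assert np.isclose(np.sum(F_i * Z), np.sum(A_i * Z_Ji))
```

This is the structural fact that makes the sparse reformulation possible: each agent only needs the principal submatrix of Z indexed by its own J_i.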

In fact, a partial

Numerical examples

Here, two examples are given for illustration.

Example 2

Consider a multi-agent network of 50 agents for (3) to verify (6). F_0, F_i, and c_i are randomly generated. Furthermore, the adjacency matrix of the multi-agent network is also randomly generated under Assumption 2.

Fig. 2(a) shows the trajectories, and it verifies the convergence of (6). Moreover, the result implies that the solutions reach consensus. Fig. 2(b) shows the trajectory of log(V(t)), where V(t) is defined in (8) and (Z, μ, Λ) is the


References (28)

  • Khalil, H. K. (2002). Nonlinear systems.

  • Kim, S., et al. (2011). Exploiting sparsity in linear and nonlinear matrix inequalities via positive semidefinite matrix completion. Mathematical Programming.

  • Liang, S., et al. (2017). Distributed nonsmooth optimization with coupled inequality constraints via modified Lagrangian function. IEEE Transactions on Automatic Control.

  • Liu, Q., et al. (2015). A second-order multi-agent network for bound-constrained distributed optimization. IEEE Transactions on Automatic Control.

    Weijian Li received the B.S. degree in mechanical engineering from Wuhan University of Technology, Wuhan, China, in 2016. He is currently pursuing the Ph.D. degree in operations research and cybernetics from the University of Science and Technology of China, Hefei, China. He is also a joint Ph. D. student in Key Laboratory of Systems and Control, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, China.

    His research interests include distributed optimization, distributed computation of network systems, and Bayesian signal processing.

    Xianlin Zeng received the B.S. and M.S. degrees in control science and engineering from the Harbin Institute of Technology, Harbin, China, in 2009 and 2011, respectively, and the Ph.D. degree in mechanical engineering from Texas Tech University, Lubbock, TX, USA, in 2015. He is currently an Associate Professor with the Key Laboratory of Intelligent Control and Decision of Complex Systems, School of Automation, Beijing Institute of Technology, Beijing, China.

    His current research interests include distributed optimization, distributed control, and distributed computation of network systems.

    Yiguang Hong received his B.S. and M.S. degrees from Peking University, China, and the Ph.D. degree from the Chinese Academy of Sciences (CAS), China. He has been a professor at the Shanghai Institute of Intelligent Science and Technology, Tongji University, since October 2020, and was previously a professor at the Academy of Mathematics and Systems Science, CAS.

    His current research interests include nonlinear control, multi-agent systems, distributed optimization/game, machine learning, and social networks.

    Prof. Hong serves as Editor-in-Chief of Control Theory and Technology. He also serves or served as Associate Editors for many journals, including the IEEE Transactions on Automatic Control, IEEE Transactions on Control of Network Systems, and IEEE Control Systems Magazine.

    He is a recipient of the Guan Zhaozhi Award at the Chinese Control Conference, the Young Author Prize of the IFAC World Congress, the Young Scientist Award of CAS, the Youth Award for Science and Technology of China, and the National Natural Science Prize of China. He is also a Fellow of the IEEE, a Fellow of the Chinese Association for Artificial Intelligence, and a Fellow of the Chinese Association of Automation (CAA). Additionally, he chairs the Technical Committee on Control Theory of the CAA and was a member of the Board of Governors of the IEEE Control Systems Society.

    Haibo Ji was born in Anhui, China, in 1964. He received the B.Eng. degree and Ph.D. degree in Mechanical Engineering from Zhejiang University and Beijing University, in 1984 and 1990, respectively. He is currently a Professor in Department of Automation, University of Science and Technology of China, Hefei, China.

    His research interests include nonlinear control and optimization, and their applications in robots and UAVs.

    This work was supported by Shanghai Municipal Science and Technology Major Project (No. 2021SHZDZX0100), the National Natural Science Foundation of China (Nos. 61733018, 61903027) and the Open Project Fund of National Defense Key Laboratory of Space Intelligent Control Technology (No. 6142208200312). The material in this paper was not presented at any conference. This paper was recommended for publication in revised form by Associate Editor Julien M. Hendrickx under the direction of Editor Christos G. Cassandras.
