Abstract
To solve a distributed optimal resource allocation problem, a collective neurodynamic approach based on recurrent neural networks (RNNs) is proposed in this paper. Multiple RNNs cooperatively solve a global constrained optimization problem in which the objective function is the sum of local non-smooth convex functions, subject to local convex sets and a global equality constraint. Different from the projection dynamics used to handle local convex sets in existing work, an internal dynamics with a projection output is designed in the algorithm so that the Slater's condition on the optimal solution can be relaxed. To avoid continuous-time communication in a group of RNNs, an aperiodic communication scheme, called the event-triggered scheme, is presented to alleviate the communication burden. It is shown that the convergence of the designed collective neurodynamic approach under event-triggered communication does not rely on global information. Furthermore, it is proved that the Zeno behavior is excluded from the event-triggered scheme. Two examples are presented to illustrate the obtained results.
Introduction
With wide applications in science and engineering, such as communication networks [1, 2], smart grids [3,4,5], and traffic networks [6], optimal resource allocation has attracted much attention. Distributed resource allocation, as a class of constrained optimization problems, seeks an optimal solution that minimizes a global function subject to both global equality constraints and local convex sets in a distributed manner. In the framework of multi-agent systems, multiple agents, each with access to a local objective function and local constraints, cooperate to seek the optimal solution to the resource allocation problem via distributed algorithms.
Many studies on the design of distributed algorithms have been reported to find a solution to the optimal resource allocation problem (see [3, 7,8,9,10,11,12] and the references therein). A distributed gradient algorithm with the help of the projection dynamics to handle local convex sets was designed for multi-agent systems with smooth objective functions [3]. The related results were also extended to non-smooth functions [7]. A robust distributed algorithm was developed for the distributed resource allocation with uncertain allocation parameters [8]. For time-varying objective functions, distributed algorithms based on the prediction-correction method and a non-smooth consensus estimator were presented to obtain the time-varying optimal allocation [9, 10]. Taking agents with inherent dynamics into consideration, some distributed algorithms for resource allocation were proposed for double-integrator and multi-integrator systems to obtain the optimal allocation [7, 13,14,15,16].
In the last decades, numerous researchers have investigated continuous-time recurrent neural networks (RNNs) to solve various optimization problems (see [17,18,19, 43, 49] and the references therein), since continuous-time RNNs can be easily implemented by analog circuits. With the development of multi-agent systems, multiple RNNs have been employed to cooperatively handle distributed optimization problems in recent years. A collective neurodynamic approach was presented to find a solution to a distributed constrained optimization problem [20]. Since then, various neurodynamic approaches have been designed for distributed (constrained) optimization with non-smooth objective functions [21,22,23,24,25,26,27,28, 30]. Resorting to a consensus protocol, a neurodynamic approach was presented to track the time-varying solution of a distributed time-varying optimization problem subject to inequality constraints [29]. To distribute chiller loading with nonconvex power consumption functions, a collaborative neurodynamic approach was proposed in [53]. For multicluster games with nonconvex functions, a collaborative neurodynamic approach was designed to seek the Nash equilibrium [54]. For the distributed resource allocation problem that this paper focuses on, a neurodynamic approach based on projected dynamics was designed in [31]. Based on the primal-dual framework, distributed accelerated neurodynamic approaches were proposed for the resource allocation problem [55]. To avoid the disclosure of global information, an adaptive neurodynamic approach was presented to solve distributed resource allocation problems [56]. Note that the above-mentioned works only focus on the design and convergence performance of neurodynamic approaches, rather than on the difficulty of implementing continuous-time communication in the designed collaborative neurodynamic approaches.
In general, the collective continuous-time RNNs can be regarded as a dynamical system. From the viewpoint of control theory, the continuous-time communication required by collective continuous-time RNNs may lead to high and redundant energy consumption. Different from the periodic sampling communication scheme, event-triggered communication schemes [32,33,34, 51], as an energy-efficient strategy, have been extensively studied in multi-agent systems over the past decades. In recent years, event-triggered schemes have been further applied in microgrids [44], servo systems [47], fuzzy systems [50], reinforcement learning [45], and neural networks [34, 48]. In event-triggered communication schemes, the times when agents (RNNs) communicate with their neighbors, that is, the times when agents transmit and/or receive information, are determined by a predefined triggering condition. Thus, provided that the performance of the multi-agent system is ensured, event-triggered schemes can efficiently reduce superfluous utilization of computation resources and communication bandwidth. Recently, for the synchronization of multiple fractional-order RNNs, an event-triggered communication scheme was presented to alleviate the communication burden in a network of RNNs [34]. An event-triggered collaborative neurodynamic approach was presented to deal with distributed global optimization [52]. However, the existing work on collective RNNs for distributed resource allocation does not consider event-triggered communication schemes.
Motivated by the nonsmoothness that naturally arises in resource allocation problems, including the multi-robot coordination problem [35] and piecewise cost functions in the economic dispatch of smart grids [36], a collaborative neurodynamic approach is studied in this paper to find a solution to the distributed resource allocation problem with non-smooth objective functions. In consideration of the application potential of RNNs for distributed resource allocation and of communication networks with limited resources, it is practically important to further explore the collaborative neurodynamic approach for distributed resource allocation under event-triggered communication schemes. In this paper, a collaborative neurodynamic approach is presented to cope with a distributed optimal resource allocation problem in which local objective functions are non-smooth and are subject to local convex sets. In summary, the main contributions are listed below.
1.
In comparison to the existing collaborative neurodynamic approaches for distributed resource allocation [31, 37, 55, 56], our proposed neural network model has a simple structure with fewer internal states and less interactive information for RNNs with non-smooth objective functions.
2.
An aperiodic communication scheme, called the event-triggered scheme, is designed in the collective neurodynamic model to reduce the communication load among RNNs and to dispense with the continuous-time communication required in [28, 31, 37, 55]. Due to the discrete-time interactive information used in the collective neurodynamic model, its convergence analysis becomes more difficult than that in [28, 31, 37, 55].
3.
Different from distributed algorithms based on the primal-dual dynamics [3, 7,8,9,10, 28, 37, 55], our proposed neurodynamic approach is designed on the basis of the KKT condition and a consensus protocol. Moreover, the convergence of the presented approach is not established on the global information, which is required in [3, 7,8,9,10].
The rest of this paper is arranged as follows. The problem is formulated in the next section, followed by the construction of the RNN model. The optimality and convergence of the proposed neurodynamic approach are then analyzed, and numerical examples are presented. The last section gives the conclusion and future work.
Notations: \(\mathbb {R}\) is the real number set. The space of n-dimensional real vectors is denoted by \(\mathbb {R}^n\). The set of \(n \times m\) real matrices is denoted by \(\mathbb {R}^{n\times m}\). The Euclidean norm of a vector \(x\in \mathbb {R}^n\) is \(\Vert x\Vert \). The spectral norm and the transpose of matrix \(A\in \mathbb {R}^{n\times n}\) are denoted by \(\Vert A\Vert \) and \(A^T\), respectively. \(A\otimes B\) denotes the Kronecker product of matrices A and B. Let \(\textrm{col}(x_1,\ldots ,x_n)=[x_1^T,\ldots ,x_n^T]^T\). The n-dimensional column vector with all elements being 1 (0) is denoted by \(1_n\) (\(0_n\)). For a non-smooth function \(f(x):\mathbb {R}^n \rightarrow \mathbb {R}\), \(\partial f(x) \subseteq \mathbb {R}^n\) denotes the subdifferential of f at x.
Problem formulation
In this section, a collective neurodynamic network with N RNNs is considered to solve a non-smooth resource allocation problem, given by

$$\begin{aligned} \begin{aligned} \min _{x_1,\ldots ,x_N}\quad&\sum _{i=1}^N f_i(x_i)\\ \text {s.t.}\quad&\sum _{i=1}^N a_ix_i=\sum _{i=1}^N b_i,\quad x_i\in \Omega _i,\ \forall i\in \{1,\ldots ,N\}. \end{aligned} \end{aligned}$$(1)
In (1), \(x_i\in \mathbb {R}^n\) is the decision (output) of the ith RNN, \(f_i(x_i):\mathbb {R}^n\rightarrow \mathbb {R}\) is the ith RNN's objective function, \(a_i>0\), \(b_i\in \mathbb {R}^n\), and \(\Omega _i\subseteq \mathbb {R}^n\) is a nonempty, closed, convex set. Let \(\Omega =\Omega _1\times \cdots \times \Omega _N\). Assume that the communication network of the N RNNs is described by a directed graph (digraph) \(\mathcal {G}=(\mathcal {I},\mathcal {E})\). The RNNs correspond to the nodes of graph \(\mathcal {G}\), denoted by the set \(\mathcal {I}=\{1,\ldots ,N\}\), and the communication links among RNNs are described by the edges of graph \(\mathcal {G}\), denoted by the set \(\mathcal {E}\). For example, if RNN j can receive information from RNN i, there is an edge (i, j) in the set \(\mathcal {E}\). Graph \(\mathcal {G}\) contains a directed spanning tree if one node is the root of directed paths to every other node. Define \(\mathcal {A}=[w_{ij}]\in \mathbb {R}^{N\times N}\) as the adjacency matrix of digraph \(\mathcal {G}\), where \(w_{ij}>0\) is the weight on link (i, j). \(D_\textrm{out}=\textrm{diag}\{d_\textrm{out}^1,\ldots ,d_\textrm{out}^N\}\) is the out-degree matrix, where \(d_\textrm{out}^i=\sum _{j\ne i}w_{ij}, \forall i \in \mathcal {I}\). \(L=D_\textrm{out}-\mathcal {A}\in \mathbb {R}^{N\times N}\) is the Laplacian matrix of graph \(\mathcal {G}\). A digraph is weight-balanced if and only if \(1_N^TL=0_N^T\), which is also equivalent to \(L+L^T\) being positive semidefinite [38].
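The graph quantities above can be checked numerically. The following sketch (an illustrative example, not part of the paper) builds the Laplacian of a hypothetical weight-balanced directed ring with four nodes and verifies the weight-balance conditions \(1_N^TL=0_N^T\) and \(L+L^T\) positive semidefinite:

```python
import numpy as np

# Hypothetical 4-node directed ring with unit weights, for illustration only.
N = 4
W = np.zeros((N, N))
for i in range(N):
    W[i, (i + 1) % N] = 1.0          # edge weight w_{i,i+1} on the ring

D_out = np.diag(W.sum(axis=1))       # out-degree matrix
L = D_out - W                        # Laplacian of the digraph

# Weight-balance check: 1_N^T L = 0_N^T, and L + L^T positive semidefinite.
assert np.allclose(np.ones(N) @ L, 0.0)
assert np.linalg.eigvalsh(L + L.T).min() > -1e-9
```

A ring is the sparsest strongly connected digraph, so it is a convenient minimal test case for weight balance.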
Remark 1
In the economic dispatch of smart grids, the generation cost of the ith generation system is described by \(f_i\), and \(\Omega _i\) characterizes the limited capacity of power generation of the ith system. The global equality constraint \(\sum _{i=1}^Na_ix_i=\sum _{i=1}^Nb_i\) expresses the balance of supply and demand, in which \(a_i\) denotes the proportion of the power generated by the ith generation unit, and \(b_i\) represents the demand of the ith load.
The following assumptions are widely given in [3, 7,8,9,10] and are considered throughout this paper for the design and analysis of the neural network.
Assumption 1
The function \(f_i\) is convex in \(x_i\in \Omega _i\), for all \(i\in \mathcal {I}\).
Assumption 2
There is an optimal solution \(x^*\in \Omega \) of problem (1).
Assumption 3
The weight-balanced communication graph \(\mathcal {G}\) has a spanning tree.
Remark 2
Although many neurodynamic approaches (see [21,22,23,24,25,26] and the references therein) have been presented to solve distributed optimization subject to both equality constraints and local convex sets, the ith RNN in these approaches computes the entire global optimal solution \(x^*\) to problem (1) rather than only its local component \(x_i^*\), \(i\in \mathcal {I}\). Hence, compared with these approaches applied to problem (1), our proposed neurodynamic approach has a simple structure with low dimension.
Model of neurodynamic approach
In this section, inspired by [5], a network of RNNs is modeled to cooperatively find an optimal solution to the resource allocation problem (1). For the ith RNN, its local information includes the objective function \(f_i(x_i)\), the local convex set \(\Omega _i\), and \(a_i\) and \(b_i\) in constraint \(\sum _{i=1}^Na_ix_i=\sum _{i=1}^Nb_i\). Define \(x_i\in \Omega _i\) as the output of the ith RNN. Then \(x=\textrm{col}(x_1,\ldots ,x_N)\) is an estimate of the optimal solution of problem (1). The dynamic equation of the ith RNN is modeled by
(1) State equation

$$\begin{aligned} \begin{aligned} \frac{\textrm{d}y_i}{\textrm{d}t}&=\sum _{j=1}^N w_{ij}(\hat{\lambda }_i-\hat{\lambda }_j),\\ \frac{\textrm{d}\lambda _i}{\textrm{d}t}&=-\sigma \lambda _i-a_i^{-1}\gamma _i, \end{aligned} \end{aligned}$$(2)

(2) Output equation

$$\begin{aligned} x_i=P_{\Omega _i}(y_i), \end{aligned}$$(3)
where \(\sigma >0\), \(w_{ij}\) is the weight on communication link \((i,j)\in \mathcal {E}\) between the ith RNN and the jth RNN, and \(\gamma _i\in \partial f_i(x_i)\) is a subgradient of \(f_i\) at \(x_i\). \(P_{\Omega _i}(y_i)\) is the projection output of the ith RNN, defined by \(P_{\Omega _i}(y_i)=\arg \min _{z_i\in \Omega _i}\Vert z_i-y_i\Vert \). \(\hat{\lambda }_i=\lambda _i(t_{k_i}^i)\) denotes the information of the ith RNN broadcast to the network at time \(t_{k_i}^i, k_i=1,2,\ldots \). Then, time \(t_{k_i+1}^i\) is decided by

$$\begin{aligned} t_{k_i+1}^i=\inf \left\{ t>t_{k_i}^i \mid \Vert \hat{\lambda }_i-\lambda _i(t)\Vert >\zeta _{i1}e^{-\zeta _{i2}t}\right\} . \end{aligned}$$(4)
In (4), \(\zeta _{i1}>0\) and \(0<\zeta _{i2}<\varrho _2(L)\), where \(\varrho _2(L)\) is the second smallest eigenvalue of L. \(\Vert \hat{\lambda }_i-\lambda _i(t)\Vert \) denotes the measurement error. Once the measurement error is greater than the threshold value \(\zeta _{i1}e^{-\zeta _{i2}t}\), the trigger transmits the latest information \(\lambda _i(t_{k_i+1}^i)\) to the network.
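Rule (4) can be checked pointwise in simulation. The sketch below is illustrative (the threshold values passed in are hypothetical); it tests whether a broadcast is due given the last broadcast value \(\hat{\lambda }_i\) and the current state \(\lambda _i(t)\):

```python
import numpy as np

def event_due(lam_hat, lam_t, t, zeta_i1, zeta_i2):
    """Triggering test of rule (4): a new broadcast is due once the
    measurement error exceeds the decaying threshold zeta_i1*exp(-zeta_i2*t)."""
    return np.linalg.norm(lam_hat - lam_t) > zeta_i1 * np.exp(-zeta_i2 * t)

# Between events, lam_hat stays frozen at lambda_i(t_k); when event_due
# returns True, the RNN broadcasts and resets lam_hat to lambda_i(t).
```

Because the threshold decays exponentially, early transients trigger broadcasts while the settled phase triggers few, which is the source of the communication savings discussed later.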
Figure 1 shows the structure of the ith RNN, \(i\in \mathcal {I}\), modeled by (2)–(3). It is seen from Fig. 1 that the ith RNN updates its state only with local information, namely its own information and the information transmitted by neighboring RNNs. Moreover, the information exchanged among RNNs is the state \(\lambda _i\), \(i\in \mathcal {I}\), rather than the decision \(x_i\), \(i\in \mathcal {I}\); thus, the privacy of each RNN's decision is protected. In addition, since the proposed RNN model has a simple structure and can be implemented by programmable circuits composed of amplifiers and operators of addition, projection, and integration, the designed neurodynamic approach can be applied to handle distributed resource allocation in large-scale systems. Let \(y=\textrm{col}(y_1,\ldots ,y_N)\), \(\lambda =\textrm{col}(\lambda _1,\ldots , \lambda _N)\), \(\hat{\lambda }=\textrm{col}(\hat{\lambda }_1,\ldots ,\hat{\lambda }_N)\), \(a=\textrm{diag}\{a_1,\cdots ,a_N\}\otimes I_n\), \(\gamma =\textrm{col}(\gamma _1,\ldots ,\gamma _N)\), and \(P_{\Omega }(y)=\textrm{col}(P_{\Omega _1}(y_1),\ldots ,P_{\Omega _N}(y_N))\). Thus, the model of the collective neurodynamic network is given by
(1) State equation

$$\begin{aligned} \begin{aligned} \frac{\textrm{d}y}{\textrm{d}t}&=(L\otimes I_n)\hat{\lambda },\\ \frac{\textrm{d}\lambda }{\textrm{d}t}&\in -\sigma \lambda -a^{-1}\partial f(x), \end{aligned} \end{aligned}$$(5)

(2) Output equation

$$\begin{aligned} x=P_{\Omega }(y). \end{aligned}$$(6)
Theoretical analysis
In this section, to ensure that the outputs of RNNs reach the optimal solution \(x^*\) to problem (1), there are two issues to be tackled: (1) the relationship between the outputs of RNNs and the optimal solution \(x^*\); (2) the convergence of the outputs of RNNs to the optimal solution \(x^*\). In what follows, the two issues will be discussed in detail.
Optimality analysis
Under Assumption 1, problem (1) is a constrained convex optimization problem. The optimality condition is given in Lemma 1.
Lemma 1
Under Assumptions 1–2, \(x^*\) is an optimal solution to problem (1), if and only if

$$\begin{aligned} a_i^{-1}\gamma _i^*=a_j^{-1}\gamma _j^*,\quad \forall i,j\in \mathcal {I},\qquad \sum _{i=1}^N a_ix_i^*=\sum _{i=1}^N b_i, \end{aligned}$$(7)
where \(\gamma _i^*\in \partial f_i(x_i^*)\), \(\forall i\in \mathcal {I}\).
This conclusion is directly obtained from the Karush-Kuhn-Tucker (KKT) conditions [39]. In the power market, (7) indicates the electricity price at the optimal power allocation \(x^*\). The relationship between the outputs of the RNNs and the optimal solution \(x^*\) is analyzed in Lemma 2.
Lemma 2
Under Assumptions 1–3, \(x^*\in \Omega \) is an optimal solution to problem (1) if and only if there exists an equilibrium point \((y^*,\lambda ^*)\) of system (5) with output \(x^*=P_{\Omega }(y^*)\), where \(y^*\in \mathbb {R}^{Nn}\) and \(\lambda ^*=\bar{\lambda }1_N\) for some \(\bar{\lambda }\in \mathbb {R}^n\).
Proof
(1)
Necessity: Assume that \((y^*,\lambda ^*)\) is an equilibrium point of system (5), satisfying
$$\begin{aligned} \begin{aligned} 0_{Nn}&=(L\otimes I_n)\lambda ^*,\\ 0_{Nn}&=-\sigma \lambda ^*-a^{-1}\gamma ^*. \end{aligned} \end{aligned}$$(8)

Under Assumption 3, it follows from the first equation in (8) that \(\lambda _i^*=\lambda _j^*=\bar{\lambda }\) for some \(\bar{\lambda }\in \mathbb {R}^n\). Under Assumptions 1 and 2, the second equation of (8) further yields that \(a_i^{-1}\gamma _i^*=a_j^{-1}\gamma _j^*\) with \(\gamma _i^*\in \partial f_i(x_i^*)\) and \(\gamma _j^*\in \partial f_j(x_j^*)\), which satisfies the optimality condition (7) in Lemma 1. Therefore, \(x^*=P_{\Omega }(y^*)\) is an optimal solution to problem (1).
(2)
Sufficiency: Assume that \(x^*\) is an optimal solution to problem (1), which exists by Assumption 2. Under Assumption 1, define \(\lambda _i=a_i^{-1}\gamma _i\) with \(\gamma _i\in \partial f_i(x_i)\). According to Lemma 1, it holds that \(\lambda _i^*=\lambda _j^*\) for \(i,j\in \{1,\ldots ,N\}\), which satisfies the first equation in (8) by Assumption 3. Since \(x=P_{\Omega }(y)\) is absolutely continuous, it follows that \(x(t)-x(0)=P_{\Omega }(y(t))-P_{\Omega }(y(0))\). If y(t) is a complete solution of (5) evolving on the largest invariant set \(\mathcal {M}=\{y^*,\lambda ^*\}\), then \(\dot{y}=0\) holds for almost all \(t\ge 0\). Since \(y(t)=y^*\) by invariance, \(x^*=P_{\Omega }(y^*)\) satisfies the second equation in (8). Therefore, \((y^*,\lambda ^*)\) is an equilibrium point of system (5).
\(\square \)
It is known from Lemma 2 that the output x(t) of the RNNs arrives at \(x^*\) if the state y(t) of (5)–(6) converges to its equilibrium point \(y^*\).
Convergence analysis
At first, the convergence of state \((y,\lambda )\) and output x of the neural network is analyzed. Then, the analysis of the Zeno behavior in event-triggering rule (4) is given.
Theorem 3
Suppose that Assumptions 1–3 hold. If the initial condition satisfies \((y(0),\lambda (0))\in \{(y,\lambda )\in \mathbb {R}^{2Nn} \mid \sum _{i=1}^N a_iy_i(0)=\sum _{i=1}^Nb_i\}\), then the state \((y(t),\lambda (t))\) of system (5) converges to \((y^*,\lambda ^*)\) and the output x(t) reaches \(x^*\).
Proof
Define the measurement error as \(e_i=\hat{\lambda }_i-\lambda _i\) for \(i\in \{1,\ldots ,N\}\). Denote \(e=\)col\((e_1,\ldots , e_N)\). Thus, the model of the collective neurodynamic network is rewritten by
(1)
State equation
$$\begin{aligned} \begin{aligned} \frac{\textrm{d}y}{\textrm{d}t}&=(L\otimes I_n)\lambda +(L\otimes I_n)e,\\ \frac{\textrm{d}\lambda }{\textrm{d}t}&\in -\sigma \lambda -a^{-1}\partial f(x), \end{aligned} \end{aligned}$$(9)
(2)
Output equation
$$\begin{aligned} x=P_{\Omega }(y). \end{aligned}$$(10)
Regard (9)–(10) as a dynamic system, where y and \(\lambda \) are the states, and x is the output. Then, the convergence of the collective neurodynamic network's output can be analyzed through the stability of the dynamic system via the theory of nonlinear dynamical systems. Consider a Lyapunov function \(V=\frac{1}{2}\lambda ^T(L^TL\otimes I_n)\lambda +E(y)\), where \(E(y):\mathbb {R}^{Nn}\rightarrow \mathbb {R}\) is defined by \(\partial E(y)=a^{-1}(L\otimes I_n)\partial f(P_{\Omega }(y))\) with \(\partial f(P_{\Omega }(y))=\textrm{col}(\partial f_1(P_{\Omega _1}(y_1)),\ldots , \partial f_N(P_{\Omega _N}(y_N)))\). Since matrix \(L^TL\) is symmetric and positive semidefinite, it yields that
Since \(\gamma \) is bounded, it follows from event-triggering rule (4) that
where \(\zeta _1=\max \{\zeta _{11},\ldots ,\zeta _{N1}\}\) and \(\zeta _2=\min \{\zeta _{12},\ldots ,\zeta _{N2}\}\). It follows from (12) that V(t) is non-increasing, i.e., \(V(t)\le V(0)\). Thus, \(\frac{1}{2}\Vert (L\otimes I_n)\lambda (t)\Vert ^2\le V(0)-E(y)\), which indicates that \(\Vert (L\otimes I_n)\lambda \Vert \) is bounded. Since \(\gamma \in \partial f(x)\) is bounded, it is derived from the second equation in (9) that
Since \(\lambda (t)\) is bounded, it follows from the second equation in (9) that \(\dot{\lambda }\) is bounded, which further yields that \((L\otimes I_n)\dot{\lambda }\) is bounded. By Barbalat's lemma [40], \(\lim _{t\rightarrow \infty } (L\otimes I_n)\lambda (t)=0_{Nn}\), which indicates that \(\lim _{t\rightarrow \infty } \lambda _i(t)=\lim _{t\rightarrow \infty }\lambda _j(t)=\bar{\lambda }\), for some \(\bar{\lambda }\in \mathbb {R}^n\). From \(\lim _{t\rightarrow \infty } (L\otimes I_n)\lambda (t)=0_{Nn}\), we know that \(\dot{y}\) is bounded, and \(y(t)=y(0)+\int _{0}^t \big ((L\otimes I_n)\lambda (\tau )+(L\otimes I_n)e(\tau )\big )\textrm{d}\tau \). Thus, \(\lim _{t\rightarrow \infty } y(t)=y^*\). Therefore, as \(t\rightarrow \infty \), \((y,\lambda )\) asymptotically converges to \((y^*,\lambda ^*)\). Since y(t) converges to the equilibrium point \(y^*\), \(a_i^{-1}\gamma _i^*=a_j^{-1}\gamma _j^*\) holds. According to Lemma 2, x(t) asymptotically converges to \(x^*=P_{\Omega }(y^*)\), which is the optimal solution to problem (1). \(\blacksquare \)
Theorem 4
For the proposed neurodynamic model (2)–(3), the Zeno behavior is excluded in event-triggered rule (4).
Proof
Similar to [41], the Zeno behavior is analyzed via the positivity of the inter-event intervals. For the ith RNN, between two consecutive triggering instants, the dynamics of the measurement error \(e_i\) satisfies \(\dot{e}_i=-\dot{\lambda }_i\). It yields that

$$\begin{aligned} \frac{\textrm{d}\Vert e_i\Vert }{\textrm{d}t}\le \Vert \dot{\lambda }_i\Vert =\Vert \sigma \lambda _i+a_i^{-1}\gamma _i\Vert , \end{aligned}$$
where \(\gamma _i\in \partial f_i(x_i)\). According to the proof of Theorem 3, the trajectories \(\lambda _i(t)\), \(i\in \mathcal {I}\), of system (5) are bounded. Then, there is a constant \(Q>0\) such that \(\frac{\textrm{d}\Vert e_i\Vert }{\textrm{d}t} \le Q\). Let \(t_{k_i}^i\) be the latest triggering time. Since \(e_i(t_{k_i}^i)=0\), it yields that \(\Vert e_i(t)\Vert \le Q(t-t_{k_i}^i)\). The next event will be triggered when \(\Vert e_i\Vert =\zeta _{i1}e^{-\zeta _{i2}t}\). Thus, the inter-event interval between two consecutive events, denoted by \(T_k^i=t_{k_i+1}^i-t_{k_i}^i\), satisfies \(\zeta _{i1}e^{-\zeta _{i2}t_{k_i+1}^i}\le QT_k^i\), i.e., \(T_k^i\ge \zeta _{i1}e^{-\zeta _{i2}t_{k_i+1}^i}/Q\). Since \(\zeta _{i1}e^{-\zeta _{i2}t}>0\) for any finite \(t\ge 0\), it yields that \(T_{k}^i>0\) for any given k and for \(i\in \{1,\ldots ,N\}\). Therefore, the Zeno behavior is excluded in the event-triggered rule (4). \(\square \)
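The positive lower bound on the inter-event interval can be illustrated numerically; the values of Q, \(\zeta _{i1}\), and \(\zeta _{i2}\) below are hypothetical.

```python
import math

# Lower bound T_k >= zeta1*exp(-zeta2*t)/Q on the inter-event interval,
# with a hypothetical error growth rate Q and threshold parameters.
Q, zeta1, zeta2 = 2.0, 0.8, 0.1

def interval_lower_bound(t_next):
    """Bound evaluated at the next triggering instant t_next (Theorem 4)."""
    return zeta1 * math.exp(-zeta2 * t_next) / Q

# The bound decays over time but stays strictly positive at every finite
# instant, so events cannot accumulate: no Zeno behavior.
bounds = [interval_lower_bound(t) for t in (0.0, 10.0, 100.0)]
```

Note that the bound shrinks as \(t\rightarrow \infty \); Zeno exclusion only requires it to be positive at each finite triggering instant, not uniformly bounded away from zero.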
Discussions
In this section, the influence of parameters on the convergence performance is discussed in Remark 3. The comparisons with the related works are discussed in Remarks 4–7, respectively. The computational complexity of the designed neurodynamic approach is analyzed in Remark 8.
Remark 3
According to the proof of Theorem 3, the parameter \(\sigma \) has an impact on the consensus convergence rate and the steady-state error. A larger \(\sigma \) can increase the convergence rate and decrease the steady-state error. In event-triggered rule (4), the parameters \(\zeta _{i1}\) and \(\zeta _{i2}\) affect the communication frequency. For fixed \(\zeta _{i2}\), a small \(\zeta _{i1}\) can shorten the inter-event intervals [46].
Remark 4
Although both this paper and [5] study the distributed resource allocation problem, there are three main differences between model (2)–(3) and the method presented in [5]. First, a more general problem (1) with local convex sets is studied in this paper; the problem studied in [5] is a special case of problem (1) in which \(a_i=1, \forall i\in \mathcal {I}\). Second, the method of handling local convex sets differs: instead of the exact penalty method used in [5], an internal dynamics with a projection output is proposed in this paper. Third, the communication scheme differs: an aperiodic communication scheme, called the event-triggered scheme, is designed in RNNs' model (2)–(3) to reduce the communication load incurred by the continuous-time communication required in [5].
Remark 5
Compared with the neurodynamic approaches designed in [31, 37, 55, 56] to solve distributed resource allocation, the RNN model presented in this paper has a simpler structure with fewer internal variables. Moreover, the assumption utilized in [31, 37, 55, 56] that a feasible solution exists in the interior of \(\Omega \) is relaxed. In contrast to continuous-time communication, required for the implementation of the neurodynamic approaches in [31, 37, 55], an asynchronous discrete-time communication scheme based on the event-triggered scheme is designed in the neurodynamic approach (2)–(3) to reduce the communication load. Different from the event-triggering rule in [56], which depends on neighboring RNNs' triggered states, the event-triggered scheme (4) is determined only by each RNN's own state, so there is no need to monitor neighbors' triggering information in real time.
Remark 6
Similar event-triggered communication schemes are designed in [34] and in this paper. In contrast to the synchronization of recurrent neural networks studied in [34], a class of distributed constrained optimization problems with non-smooth objective functions subject to global constraints and local convex sets is considered in this paper. An explicit event-triggering function is given in (4). Different selections of the parameters \(\zeta _{i1}\) and \(\zeta _{i2}\) may affect the inter-event intervals and thus adjust the communication frequency.
Remark 7
Although this paper and the related works [3, 7,8,9,10, 28, 37] study the distributed resource allocation problem, the significant advantage of our proposed neurodynamic approach is its simple structure with fewer internal states and less interactive information. Moreover, the proposed neurodynamic approach is fully distributed, that is, the convergence of the RNNs' outputs does not rely on global information, such as knowledge of the communication network.
Remark 8
The solution of the non-smooth resource allocation problem (1) exists. Thus, from the perspective of computational complexity, the designed neurodynamic approach (2)–(3) has polynomial-order complexity. From the viewpoint of control systems, the designed neurodynamic approach (2)–(3) has the ability to tolerate parameter uncertainty. The robustness of the designed neurodynamic approach (2)–(3) to model parameter uncertainty can be analyzed in our subsequent studies.
Numerical simulations
In this section, two examples are given to verify the proposed neurodynamic approach.
Example 1
The distributed economic dispatch in smart grids is considered here. Figure 2a shows a smart grid composed of four buses. The four buses exchange information on a graph \(\mathcal {G}\) given in Fig. 2b. Each bus has a cost function [7, 42], given by
The economic dispatch problem is
where \(\Omega _i\) denotes the local generation capacity of the ith generator. The parameters \(\alpha _i,\beta _i,c_i,\kappa _i\) and \(\Omega _i\), \(i\in \{1,2,3,4\}\), are given in Table 1. According to Lemma 1, the optimal solution exists and is \(x^*=[22.1,31.7,68.4,17.8]^T\). It follows from Theorem 3 that the outputs of the RNNs can reach the optimal solution \(x^*\).
The initial conditions are \(x(0)=y(0)=[30, 35, 55, 20]^T\) and \(\lambda (0)=[0,0,0,0]^T\). In the dynamics of RNNs (2), select \(\sigma =1\). For the event-triggered communication scheme (4), choose \([\zeta _{11},\zeta _{21},\zeta _{31},\zeta _{41}]^T=[0.8,0.5,0.5,0.8]^T\) and \([\zeta _{12},\zeta _{22},\zeta _{32},\zeta _{42}]^T=[0.1,0.15,0.15,0.25]^T\). The four RNNs' outputs obtained by (2) are shown in Fig. 3, where the solid lines are the four RNNs' outputs and the dotted lines are the optimal decisions. It indicates that the four RNNs' outputs reach the optimal solution \(x^*\). In Fig. 4, the state \(\lambda _i(t)\) of the ith RNN, \(i\in \{1,2,3,4\}\), updated by (2), reaches consensus, which implies that (7) holds at the steady state. The event-triggered time sequences of the four RNNs are given in Fig. 5, from which the inter-event intervals are positive. This further indicates the absence of the Zeno behavior in the designed event-triggering rule (4), which validates the results given in Theorem 4. Compared with periodic communication with \(T=0.1s\), the numbers of communications under the periodic and event-triggered schemes are shown in Fig. 6, from which event-triggered scheme (4) effectively reduces the number of communications, so the communication load is alleviated. To show that the event-triggered communication has little influence on the convergence performance, Fig. 7 depicts the outputs of the four RNNs under continuous-time and event-triggered communication, respectively. It is seen from Fig. 7 that the convergence performance of neurodynamic model (2)–(3) with event-triggered communication scheme (4) is similar to that of the model with continuous-time communication.
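The communication savings reported in Fig. 6 can be reproduced in spirit with a toy experiment. The sketch below counts broadcasts for a single state trajectory assumed to settle exponentially (the trajectory is illustrative, not the data of Example 1; the thresholds are borrowed from RNN 1), comparing rule (4) with periodic sampling at \(T=0.1s\):

```python
import numpy as np

dt, t_end = 1e-3, 20.0
ts = np.arange(0.0, t_end, dt)
lam = np.exp(-0.5 * ts)                  # assumed lambda_i(t) trajectory

zeta1, zeta2 = 0.8, 0.1                  # thresholds of RNN 1 in Example 1
lam_hat, events = lam[0], 0
for t, v in zip(ts, lam):
    if abs(lam_hat - v) > zeta1 * np.exp(-zeta2 * t):
        lam_hat, events = v, events + 1  # broadcast: reset measurement error

periodic = int(round(t_end / 0.1))       # 200 transmissions under T = 0.1 s
# Typically events is far below periodic: once the state settles, the
# slowly decaying threshold lets long gaps open between broadcasts.
```

This mirrors the qualitative conclusion of Fig. 6: the event-triggered scheme transmits far fewer messages than periodic sampling for a settling trajectory.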
Example 2
As an illustration, the economic dispatch of the IEEE 30-bus system [3] is considered. The cost function of each power generator is the same as (15). Table 2 shows the parameters in the cost functions. Figure 8 depicts the communication graph of the IEEE 30-bus system. The balance between supply and demand is described by \(\sum _{i=1}^6x_i=290\). The initial conditions are \(x(0)=y(0)=[126,78,48,12,12,14]^T\) and \(\lambda (0)=[0,0,0,0,0,0]^T\). According to Lemma 1, the optimal solution of problem (1) exists and is \(x^*=[174.9,44.67,18.52,10.49,10.49,31.35]^T\). Set \(\sigma =0.5\), \(\zeta _{i1}=0.1\), and \(\zeta _{i2}=0.01\), for all \(i\in \{1,\ldots ,6\}\) in (2) and (4), respectively. The outputs of the six RNNs, regulated by (2), are shown in Fig. 9a, which indicates that the proposed neurodynamic approach finds the optimal solution \(x^*\). The results given in Theorem 3 are validated by Fig. 9. The effect of different \(\sigma \) on the convergence rate of the six RNNs' outputs is also shown in Fig. 9: the outputs reach \(x^*\) before \(t=100s\) when \(\sigma =0.3\), while for \(\sigma =0.1\) they reach \(x^*\) after \(t=100s\). This indicates that the convergence rate becomes lower when the parameter \(\sigma \) is smaller. The analysis of the inter-event intervals of the six RNNs is shown in Table 3, from which the event-triggered scheme is an aperiodic communication method, the event counts indicate the reduction of communication frequency, and the positive minimal intervals imply the absence of the Zeno behavior.
Conclusion
This paper presented a novel collective neurodynamic approach to find an optimal solution to the distributed resource allocation problem with non-smooth local objective functions and local convex sets. An aperiodic communication scheme based on an event-triggering rule was designed for the network of RNNs to alleviate the communication load. It was proved that the RNNs' outputs reached the optimal resource allocation without any global information. Compared with most existing continuous-time collective neurodynamic approaches, our proposed model has a simple structure and a resource-efficient communication scheme. In addition, the Zeno behavior was excluded from the event-triggered communication scheme presented in this paper. Taking time-varying objective functions into consideration, future work may study collective neurodynamic approaches for distributed time-varying resource allocation.
Availability of data and materials
The authors declare that the data supporting the findings of this study are available within the paper.
References
Huang X, Wu K, Jiang M, Huang L, Xu J (2021) Distributed resource allocation for general energy efficiency maximization in offshore maritime device-to-device communication. IEEE Wirel Commun Lett 10(6):1344–1348
Wang H, Li Y, Qian J (2020) Self-adaptive resource allocation in underwater acoustic interference channel: a reinforcement learning approach. IEEE Internet Things J 7(4):2816–2827
Yi P, Hong Y, Liu F (2016) Initialization-free distributed algorithms for optimal resource allocation with feasibility constraints and application to economic dispatch of power systems. Automatica 74:259–269
Wang D, Chen M, Wang W (2019) Distributed extremum seeking for optimal resource allocation and its application to economic dispatch in smart grids. IEEE Trans Neural Netw Learn Syst 30(10):3161–3171
Xu C, He X (2021) A fully distributed approach to optimal energy scheduling of users and generators considering a novel combined neurodynamic algorithm in smart grid. IEEE/CAA J Autom Sin 8(7):1325–1335
Low SH, Lapsley DE (1999) Optimization flow control. I. Basic algorithm and convergence. IEEE/ACM Trans Netw 7(6):861–874
Deng Z, Nian X, Hu C (2020) Distributed algorithm design for nonsmooth resource allocation problems. IEEE Trans Cybern 50(7):3208–3217
Zeng X, Yi P, Hong Y (2018) Distributed algorithm for robust resource allocation with polyhedral uncertain allocation parameters. J Syst Sci 31:103–119
Wang B, Fei Q, Wu Q (2021) Distributed time-varying resource allocation optimization based on finite-time consensus approach. IEEE Control Syst Lett 5(2):3004764
Wang B, Sun S, Ren W (2020) Distributed continuous-time algorithms for optimal resource allocation with time-varying quadratic cost functions. IEEE Trans Control Netw Syst 7(4):1974–1984
Ma L, Hu C, Yu J, Wang L, Jiang H (2023) Distributed fixed/preassigned- time optimization based on piecewise power-law design. IEEE Trans Cybern 53(7):4320–4333
Cai X, Zhong H, Li Y, Liao J, Nan X, Gao B (2023) Distributed extremum-seeking based resource allocation algorithm with input dead-zone. Int J Robust Nonlinear Control 33:3947–3960
Deng Z (2021) Distributed algorithm design for resource allocation problems of high-order multi-agent systems. IEEE Trans Control Netw Syst 8(1):177–186
Wang D, Wang Z, Wu Z, Wang W (2022) Distributed convex optimization for nonlinear multi-agent systems disturbed by a second-order stationary process over a digraph. Sci China Inf Sci 65:132201
Wang Z, Liu J, Wang D, Wang W (2021) Distributed cooperative optimization for multiple heterogeneous Euler–Lagrangian systems under global equality and inequality constraints. Inf Sci 577:449–466
Cai X, Nan X, Gao B (2023) A distributed extremum seeking based resource allocation algorithm over switching networks. Int J Robust Nonlinear Control 33:3790–3806
Tank DW, Hopfield JJ (1986) Simple ‘neural’ optimization networks: An A/D converter, signal decision circuit, and a linear programming circuit. IEEE Trans Circuits Syst 33(5):533–541
Xia Y, Wang J (2004) A general projection neural network for solving monotone variational inequalities and related optimization problems. IEEE Trans Neural Netw 15(2):318–328
Liu Q, Wang J (2013) A one-layer projection neural network for nonsmooth optimization subject to linear equalities and bound constraints. IEEE Trans Neural Netw Learn Syst 24(5):812–824
Liu Q, Yang S, Wang J (2017) A collective neurodynamic approach to distributed constrained optimization. IEEE Trans Neural Netw Learn Syst 28(8):1747–1758
Jia W, Qin S, Xue X (2019) A generalized neural network for distributed nonsmooth optimization with inequality constraint. Neural Netw 119:46–56
Jiang X, Qin S, Xue X (2020) A penalty-like neurodynamic approach to constrained nonsmooth distributed convex optimization. Neurocomputing 377:225–233
Wen X, Wang Y, Qin S (2021) A nonautonomous-differential-inclusion neurodynamic approach for nonsmooth distributed optimization on multi-agent systems. Neural Comput Appl 33:13909–13920
Wen X, Luan L, Qin S (2021) A continuous-time neurodynamic approach and its discretization for distributed convex optimization over multi-agent systems. Neural Netw 143:52–65
Ma L, Bian W (2021) A novel multiagent neurodynamic approach to constrained distributed convex optimization. IEEE Trans Cybern 51(3):1322–1333
Wei Z, Jia W, Bian W, Qin S (2023) A subgradient-based neural network to constrained distributed convex optimization. Neural Comput Appl 35:9961–9971
Wu W, Zhang Y, Zhang W, Xie W (2023) Output-feedback finite-time safety-critical coordinated control of path-guided marine surface vehicles based on neurodynamic optimization. IEEE Trans Syst Man Cybern Syst 53(3):1788–1800
He X, Fang X, Yu J (2019) Distributed energy management strategy for reaching cost-driven optimal operation integrated with wind forecasting in multimicrogrids system. IEEE Trans Syst Man Cybern Syst 48(8):1643–1651
He S, He X, Huang T (2021) A continuous-time consensus algorithm using neurodynamic system for distributed time-varying optimization with inequality constraints. J Franklin Inst 358:6741–6758
Jiang X, Qin S, Xue X, Liu X (2022) A second-order accelerated neurodynamic approach for distributed convex optimization. Neural Netw 146:161–173
Le X, Chen S, Yan Z, Xi J (2018) A neurodynamic approach to distributed optimization with globally coupled constraints. IEEE Trans Cybern 48(11):3149–3158
Dai MZ, Xiao F (2018) Edge-event- and self-triggered synchronization of coupled harmonic oscillators with quantization and time delays. Neurocomputing 310:172–182
Liu J, Zhang Y, Yu Y, Sun C (2020) Fixed-time leader-follower consensus of networked nonlinear systems via event/self-triggered control. IEEE Trans Neural Netw Learn Syst 31(11):5029–5037
Liu P, Wang J, Zeng Z (2023) Event-triggered synchronization of multiple fractional-order recurrent neural networks with time-varying delays. IEEE Trans Neural Netw Learn Syst 34(8):4620–4630
Wei Y, Shang C, Fang H, Zeng X, Dou L, Pardalos P (2022) Solving a class of nonsmooth resource allocation problems with directed graphs through distributed Lipschitz continuous multi-proximal algorithms. Automatica 136:110071
Chen G, Yang Q, Song Y, Lewis F (2020) A distributed continuous-time algorithm for nonsmooth constrained optimization. IEEE Trans Autom Control 66(11):4914–4921
Le X, Chen S, Li F, Yan Z, Xi J (2019) Distributed neurodynamic optimization for energy internet management. IEEE Trans Syst Man Cybern Syst 49(8):1624–1633
Godsil C, Royle G (2001) Algebraic graph theory. Springer, New York
Boyd S, Vandenberghe L (2004) Convex optimization. Cambridge University Press, Cambridge
Khalil H (2002) Nonlinear systems, 3rd edn. Prentice-Hall, Hoboken
Cai X, Nan X, Gao B (2023) Distributed adaptive generalized Nash equilibrium seeking algorithm with event-triggered communication. Asian J Control 25:2239–2248
Binetti G, Davoudi A, Lewis FL, Naso D, Turchiano B (2014) Distributed consensus-based economic dispatch with transmission losses. IEEE Trans Power Syst 29(4):1711–1720
Luan L, Wen X, Qin S (2022) Distributed neurodynamic approaches to nonsmooth optimization problems with inequality and set constraint. Complex Intell Syst 8(6):5511–5530
Cai X, Nan X, Gao B, Yuan J (2023) Distributed event-triggered secondary control of microgrids with quantization communication. IEEE Trans Power Syst 38(5):4572–4581
Zhu HY, Li YX, Tong SC (2023) Dynamic event-triggered reinforcement learning control of stochastic nonlinear systems. IEEE Trans Fuzzy Syst 31(9):2917–2928
Seyboth GS, Dimarogonas DV, Johansson KH (2013) Event-based broadcasting for multi-agent average consensus. Automatica 49:245–252
Djordjevic V, Tao H, Song X, He S, Gao W (2023) Data-driven control of hydraulic servo actuator: an event-triggered adaptive dynamic programming approach. Math Biosci Eng 20(5):8561–8582
Song X, Wu N, Song S, Stojanovic V (2023) Switching-like event-triggered state estimation for reaction-diffusion neural networks against DoS attacks. Neural Process Lett 55:8997–9018
Song X, Sun P, Song S, Stojanovic V (2023) Finite-time adaptive neural resilient DSC for fractional-order nonlinear large-scale systems against sensor-actuator faults. Nonlinear Dyn 111:12181–12196
Song X, Song Y, Stojanovic V, Song S (2023) Improved dynamic event-triggered security control for T-S fuzzy LPV-PDE systems via pointwise measurements and point control. Int J Fuzzy Syst 25:3177–3192
He P, Wen J, Stojanovic V, Liu F, Luan X (2023) Finite-time control of discrete-time semi-Markov jump linear systems: a self-triggered MPC approach. J Franklin Inst 359(13):6939–6957
Xia Z, Liu Y, Wang J (2023) An event-triggered collaborative neurodynamic approach to distributed global optimization. Neural Netw 169:181–190
Chen Z, Wang J, Han QL (2023) A collaborative neurodynamic optimization approach to distributed chiller loading. IEEE Trans Neural Netw Learn Syst 1–10
Xia Z, Liu Y, Yu W, Wang J (2023) Collaborative neurodynamic optimization approach to distributed Nash equilibrium seeking in multicluster games with nonconvex functions. IEEE Trans Cybern 1–11
Zhao Y, He X, Yu J, Huang T (2023) Distributed accelerated primal-dual neurodynamic approaches for resource allocation problem. Sci China Technol Sci 66:3639–3650
Luan L, Qin S (2023) Adaptive neurodynamic approach to multiple constrained distributed resource allocation. IEEE Trans Neural Netw Learn Syst 1–11
Acknowledgements
This work was sponsored by the National Natural Science Foundation of China (Grant No. 62303394), by the Fundamental Research Funds for Universities of Xinjiang Uygur Autonomous Region (Grant No. XJEDU2023P025) and by the Natural Science Foundation of Xinjiang Uygur Autonomous Region (Grant No. 2022D01C694).
Author information
Authors and Affiliations
Corresponding author
Ethics declarations
Conflict of interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Cai, X., Gao, B. & Nan, X. A collective neurodynamic approach to distributed resource allocation with event-triggered communication. Complex Intell. Syst. (2024). https://doi.org/10.1007/s40747-024-01436-w