Introduction

With wide applications in science and engineering, such as communication networks [1, 2], smart grids [3,4,5], and traffic networks [6], optimal resource allocation has attracted much attention. Distributed resource allocation, as a class of constrained optimization problems, seeks an optimal solution that minimizes a global objective function subject to both global equality constraints and local convex sets in a distributed manner. In the framework of multi-agent systems, multiple agents, each with access to a local objective function and local constraints, cooperate to seek the optimal solution to the resource allocation problem via distributed algorithms.

Many studies have been reported on the design of distributed algorithms for the optimal resource allocation problem (see [3, 7,8,9,10,11,12] and the references therein). A distributed gradient algorithm, which handles local convex sets via projection dynamics, was designed for multi-agent systems with smooth objective functions [3]; these results were also extended to non-smooth functions [7]. A robust distributed algorithm was developed for distributed resource allocation with uncertain allocation parameters [8]. For time-varying objective functions, distributed algorithms based on the prediction-correction method and a non-smooth consensus estimator were presented to obtain the time-varying optimal allocation [9, 10]. Taking agents with inherent dynamics into consideration, distributed resource allocation algorithms were proposed for double-integrator and multi-integrator systems to obtain the optimal allocation [7, 13,14,15,16].

In the last decades, numerous researchers have investigated continuous-time recurrent neural networks (RNNs) to solve various optimization problems (see [17,18,19, 43, 49] and the references therein), since continuous-time RNNs can be easily implemented by analog circuits. Owing to the development of multi-agent systems, multiple RNNs have been employed to cooperatively handle distributed optimization problems in recent years. A collective neurodynamic approach was presented to find a solution to a distributed constrained optimization problem [20]. Since then, different neurodynamic approaches have been designed to solve distributed (constrained) optimization with non-smooth objective functions [21,22,23,24,25,26,27,28, 30]. Resorting to the consensus protocol, a neurodynamic approach was presented to track the time-varying solution of a distributed time-varying optimization problem subject to inequality constraints [29]. To distribute chiller loading with nonconvex power consumption functions, a collaborative neurodynamic approach was proposed in [53]. For multicluster games with nonconvex functions, a collaborative neurodynamic approach was designed to seek the Nash equilibrium [54]. For the distributed resource allocation problem that this paper focuses on, a neurodynamic approach based on projected dynamics was designed in [31]. Based on the primal-dual framework, distributed accelerated neurodynamic approaches were proposed for the resource allocation problem [55]. To avoid the disclosure of global information, an adaptive neurodynamic approach was presented to solve distributed resource allocation problems [56]. Note that the above-mentioned literature focuses only on the design and convergence performance of neurodynamic approaches rather than on the difficulty of implementing the continuous-time communication required by the designed collaborative neurodynamic approaches.

In general, a collection of continuous-time RNNs can be regarded as a dynamical system. From the viewpoint of control theory, the continuous-time communication required by collective continuous-time RNNs may lead to high and redundant energy consumption. Different from the periodic sampling communication scheme, event-triggered communication schemes [32,33,34, 51], as an energy-efficient strategy, have been extensively studied in multi-agent systems over the past decades. In recent years, event-triggered schemes have been further applied in microgrids [44], servo systems [47], fuzzy systems [50], reinforcement learning [45], and neural networks [34, 48]. In an event-triggered communication scheme, the times at which agents (RNNs) communicate with their neighbors, that is, the times at which agents transmit and/or receive information, are determined by a predefined triggering condition. Thus, provided that the performance of the multi-agent system is ensured, event-triggered schemes can efficiently reduce the superfluous use of computation resources and communication bandwidth. Recently, for the synchronization of multiple fractional-order RNNs, an event-triggered communication scheme was presented to alleviate the communication burden in a network of RNNs [34]. An event-triggered collaborative neurodynamic approach was presented to deal with distributed global optimization [52]. However, the existing work on collective RNNs for distributed resource allocation does not consider event-triggered communication schemes.

Motivated by the nonsmoothness that arises naturally in resource allocation problems, including the multi-robot coordination problem [35] and piecewise cost functions in the economic dispatch of smart grids [36], a collaborative neurodynamic approach is studied in this paper to find a solution to the distributed resource allocation problem with non-smooth objective functions. Considering the application potential of RNNs for distributed resource allocation and communication networks with limited resources, it is practically important to further explore the collaborative neurodynamic approach for distributed resource allocation using event-triggered communication schemes. In this paper, a collaborative neurodynamic approach is presented to cope with a distributed optimal resource allocation problem in which local objective functions are non-smooth and subject to local convex sets. In summary, the main contributions are listed below.

  1.

    In comparison to the existing collaborative neurodynamic approaches for distributed resource allocation [31, 37, 55, 56], our proposed neural network model has a simple structure with fewer internal states and less interactive information for RNNs with non-smooth objective functions.

  2.

    An aperiodic communication scheme, called the event-triggered scheme, is designed in the collective neurodynamic model to reduce the communication loads among RNNs and to eliminate the continuous-time communication required in [28, 31, 37, 55]. Owing to the discrete-time interactive information used in the collective neurodynamic model, its convergence analysis becomes more difficult than that in [28, 31, 37, 55].

  3.

    Different from distributed algorithms based on primal-dual dynamics [3, 7,8,9,10, 28, 37, 55], our proposed neurodynamic approach is designed on the basis of the KKT condition and a consensus protocol. Moreover, the convergence of the presented approach does not rely on global information, which is required in [3, 7,8,9,10].

The rest of this paper is organized as follows. The problem is formulated in the next section, and the model of RNNs is constructed in the section after that. The subsequent section analyzes the optimality and convergence of the proposed neurodynamic approach, followed by a section of numerical examples. The last section concludes the paper and discusses future work.

Notations: \(\mathbb {R}\) is the real number set. The space of n-dimensional real vectors is denoted by \(\mathbb {R}^n\). The set of \(n \times m\) real matrices is denoted by \(\mathbb {R}^{n\times m}\). The Euclidean norm of a vector \(x\in \mathbb {R}^n\) is \(\Vert x\Vert \). The spectral norm and the transpose of matrix \(A\in \mathbb {R}^{n\times n}\) are denoted by \(\Vert A\Vert \) and \(A^T\), respectively. \(A\otimes B\) denotes the Kronecker product of matrices A and B. Let \(\textrm{col}(x_1,\ldots ,x_n)=[x_1^T,\ldots ,x_n^T]^T\). The n-dimensional column vector with all elements being 1 (0) is denoted by \(1_n\) (\(0_n\)). For a non-smooth function \(f(x):\mathbb {R}^n \rightarrow \mathbb {R}\), \(\partial f(x) \subseteq \mathbb {R}^n\) denotes the subdifferential of function f(x) at x.

Problem formulation

In this section, a collective neurodynamic network with N RNNs is considered to solve a non-smooth resource allocation problem, given by

$$\begin{aligned} \begin{array}{l} \min f(x)=\sum _{i=1}^Nf_i(x_i)\\ \mathrm{s.t.} \ \ \sum _{i=1}^Na_ix_i=\sum _{i=1}^Nb_i\\ \ \ \ \ \ \ \ x_i\in \Omega _i, i\in \{1,\ldots ,N\}. \end{array} \end{aligned}$$
(1)

In (1), \(x_i\in \mathbb {R}^n\) is the decision (output) of the i-th RNN, \(f_i(x_i):\mathbb {R}^n\rightarrow \mathbb {R}\) is the ith RNN’s objective function, \(a_i>0\), \(b_i\in \mathbb {R}^n\), and \(\Omega _i\subseteq \mathbb {R}^n\) is a nonempty, closed, convex set. Let \(\Omega =\Omega _1\times \cdots \times \Omega _N\). Assume that the communication network of the N RNNs is described by a directed graph (digraph) \(\mathcal {G}=(\mathcal {I},\mathcal {E})\). RNNs correspond to the nodes of graph \(\mathcal {G}\), denoted by set \(\mathcal {I}=\{1,\ldots ,N\}\). Communication links among RNNs are described by the edges of graph \(\mathcal {G}\), denoted by set \(\mathcal {E}\). For example, if RNN j can receive information from RNN i, there is an edge (i, j) in the set \(\mathcal {E}\). Graph \(\mathcal {G}\) has a directed spanning tree if one node is the root of directed paths to every other node. Define \(\mathcal {A}=[w_{ij}]\in \mathbb {R}^{N\times N}\) as the adjacency matrix of digraph \(\mathcal {G}\), where \(w_{ij}>0\) is the weight on link (i, j). \(D_\textrm{out}=\)diag\(\{d_\textrm{out}^1,\ldots ,d_\textrm{out}^N\}\) is the out-degree matrix, where \(d_\textrm{out}^i=\sum _{j\ne i}w_{ij}, \forall i \in \mathcal {I}\). \(L=D_\textrm{out}-\mathcal {A}\in \mathbb {R}^{N\times N}\) is the Laplacian matrix of graph \(\mathcal {G}\). A digraph is weight-balanced if and only if \(1_N^TL=0_N^T\); for a strongly connected weight-balanced digraph, \(L+L^T\) is positive semidefinite [38].
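The graph-theoretic conditions above can be checked numerically. Below is a minimal sketch, in which the 4-node directed ring and its unit weights are illustrative assumptions (not from the paper): build \(L=D_\textrm{out}-\mathcal {A}\), then verify \(1_N^TL=0_N^T\) (weight balance) and the positive semidefiniteness of \(L+L^T\).

```python
import numpy as np

# Hypothetical 4-node digraph: entry A[i, j] = w_ij is the weight on
# directed link (i, j). A unit-weight ring 1->2->3->4->1 is weight-balanced.
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)

D_out = np.diag(A.sum(axis=1))   # out-degree matrix
L = D_out - A                    # Laplacian L = D_out - A

# Weight balance: column sums of L vanish, i.e. 1^T L = 0^T.
print(np.allclose(np.ones(4) @ L, 0))                 # True for this ring

# For a strongly connected weight-balanced digraph, L + L^T is
# positive semidefinite: all eigenvalues are nonnegative.
eigs = np.linalg.eigvalsh(L + L.T)
print(np.all(eigs >= -1e-9))                          # True
```

Row sums of any Laplacian vanish by construction; the column-sum test is the part specific to weight balance.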

Remark 1

In the economic dispatch of smart grids, the generation cost of the ith generation system is described by \(f_i\), and \(\Omega _i\) characterizes the limited capacity of power generation of the ith system. The global equality constraint \(\sum _{i=1}^Na_ix_i=\sum _{i=1}^Nb_i\) expresses the balance of supply and demand, in which \(a_i\) denotes the proportion of the power generated by the ith generation unit, and \(b_i\) represents the demand of the ith load.

The following assumptions are widely given in [3, 7,8,9,10] and are considered throughout this paper for the design and analysis of the neural network.

Assumption 1

The function \(f_i\) is convex in \(x_i\in \Omega _i\), for all \(i\in \mathcal {I}\).

Assumption 2

There is an optimal solution \(x^*\in \Omega \) of problem (1).

Assumption 3

The weight-balanced communication graph \(\mathcal {G}\) has a spanning tree.

Remark 2

Although many neurodynamic approaches (see [21,22,23,24,25,26] and the references therein) have been presented to solve distributed optimization subject to both equality constraints and local convex sets, the ith RNN underlying these approaches computes the entire global optimal solution \(x^*\) to problem (1) rather than only its local component \(x_i^*\), \(i\in \mathcal {I}\). Hence, compared with these approaches applied to problem (1), our proposed neurodynamic approach has a simpler structure with lower dimension.

Fig. 1

The structure of the ith RNN for distributed resource allocation (1)

Model of neurodynamic approach

In this section, inspired by [5], a network of RNNs is modeled to cooperatively find an optimal solution to the resource allocation problem (1). For the ith RNN, its local information includes the objective function \(f_i(x_i)\), the local convex set \(\Omega _i\), \(a_i\), and \(b_i\) in constraint \(\sum _{i=1}^Na_ix_i=\sum _{i=1}^Nb_i\). Define \(x_i\in \Omega _i\) as the output of the ith RNN. \(x=\)col\((x_1,\ldots ,x_N)\) is an estimate of the optimal solution of problem (1). The dynamic equation of the ith RNN is modeled by

(1) State equation

$$\begin{aligned} \begin{aligned} \frac{\textrm{d}y_i}{\textrm{d}t}&=\sum _{j=1}^N w_{ij}(\hat{\lambda }_i-\hat{\lambda }_j),\\ \frac{\textrm{d}\lambda _i}{\textrm{d}t}&=-\sigma \lambda _i-a_i^{-1}\gamma _i, \end{aligned} \end{aligned}$$
(2)

(2) Output equation

$$\begin{aligned} x_i=P_{\Omega _i}(y_i), \end{aligned}$$
(3)

where \(\sigma >0\), \(w_{ij}\) is the weight on communication link \((i,j)\in \mathcal {E}\) between the ith RNN and the jth RNN, and \(\gamma _i\in \partial f_i(x_i)\) is a subgradient of \(f_i\) at \(x_i\). \(P_{\Omega _i}(y_i)\) is the projection output of the ith RNN, defined by \(P_{\Omega _i}(y_i)=\arg \min _{z_i\in \Omega _i}\Vert z_i-y_i\Vert \). \(\hat{\lambda }_i=\lambda _i(t_{k_i}^i)\) denotes the information of the i-th RNN broadcast to the network at time \(t_{k_i}^i, k_i=1,2,\ldots \). Then, time \(t_{k_i+1}^i\) is decided by

$$\begin{aligned} t_{k_i+1}^i=\inf \left\{ t>t_{k_i}^i \mid \Vert \hat{\lambda }_i-\lambda _i(t)\Vert ^2\ge \zeta _{i1}e^{-\zeta _{i2}t}\right\} . \end{aligned}$$
(4)

In (4), \(\zeta _{i1}>0\) and \(0<\zeta _{i2}<\varrho _2(L)\), where \(\varrho _2(L)\) is the second smallest eigenvalue of L. \(\Vert \hat{\lambda }_i-\lambda _i(t)\Vert \) denotes the measurement error. Once the measurement error exceeds the threshold \(\zeta _{i1}e^{-\zeta _{i2}t}\), the trigger transmits the latest information \(\lambda _i(t_{k_i+1}^i)\) to the network.
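Rule (4) needs only each RNN's own state, a decaying threshold, and the spectral bound \(\zeta _{i2}<\varrho _2(L)\), which can be obtained numerically. A sketch under illustrative assumptions (an undirected unit-weight 4-node ring and the parameter values below are not from the paper):

```python
import numpy as np

# Hypothetical undirected unit-weight 4-node ring; its Laplacian has
# eigenvalues {0, 2, 2, 4}, so varrho_2(L) = 2.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
rho2 = np.sort(np.linalg.eigvalsh(L))[1]   # second-smallest eigenvalue, ~2

def triggered(lam_hat_i, lam_i, t, zeta_i1=0.8, zeta_i2=0.5 * rho2):
    # Rule (4): broadcast when the squared measurement error exceeds
    # the decaying threshold zeta_i1 * exp(-zeta_i2 * t).
    return np.linalg.norm(lam_hat_i - lam_i) ** 2 >= zeta_i1 * np.exp(-zeta_i2 * t)

print(rho2)
print(bool(triggered(np.array([1.0]), np.array([1.0]), 0.0)))   # no error: False
```

Here \(\zeta _{i2}=0.5\,\varrho _2(L)\) is just one admissible choice inside the interval \((0,\varrho _2(L))\).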

Figure 1 shows the structure of the ith RNN, \(i\in \mathcal {I}\), modeled by (2)–(3). It is seen from Fig. 1 that the ith RNN updates its state using only local information, containing its own information and the information transmitted by neighboring RNNs. Moreover, the information exchanged among RNNs is the state \(\lambda _i\), \(i\in \mathcal {I}\), rather than the decision \(x_i\), \(i\in \mathcal {I}\); thus, the privacy of each RNN’s decision is protected. In addition, since the proposed model of RNNs has a simple structure and can be implemented by programmable circuits composed of amplifiers and operators of addition, projection, and integration, the designed neurodynamic approach can be applied to distributed resource allocation in large-scale systems. Let \(y=\)col\((y_1,\ldots ,y_N)\), \(\lambda =\)col\((\lambda _1,\ldots , \lambda _N)\), \(\hat{\lambda }=\)col\((\hat{\lambda }_1,\ldots ,\hat{\lambda }_N)\), \(a=\)diag\(\{a_1,\cdots ,a_N\}\otimes I_n\), \(\gamma =\)col\((\gamma _1,\ldots ,\gamma _N)\), and \(P_{\Omega }(y)=\)col\((P_{\Omega _1}(y_1),\ldots ,P_{\Omega _N}(y_N))\). Thus, the model of the collective neurodynamic network is given by

(1) State equation

$$\begin{aligned} \begin{aligned} \frac{\textrm{d}y}{\textrm{d}t}&=(L\otimes I_n)\hat{\lambda },\\ \frac{\textrm{d}\lambda }{\textrm{d}t}&=-\sigma \lambda -a^{-1}\gamma , \end{aligned} \end{aligned}$$
(5)

(2) Output equation

$$\begin{aligned} x=P_{\Omega }(y). \end{aligned}$$
(6)
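A forward-Euler simulation of the stacked model (5)–(6) with rule (4) can illustrate the intended behavior. The following sketch makes simplifying assumptions not taken from the paper: four RNNs on an undirected unit-weight ring, scalar decisions, \(a_i=1\), smooth quadratic costs \(f_i(x)=(x-c_i)^2/2\) (so the subgradient is single-valued), box sets \(\Omega _i=[0,5]\), and illustrative parameter values; the feasible initialization keeps \(\sum _i y_i\) invariant because \(1_N^TL=0_N^T\).

```python
import numpy as np

N, dt, T = 4, 0.01, 60.0
c = np.array([1.0, 2.0, 3.0, 4.0])   # cost minimizers; sum(c) = 10
# Laplacian of the undirected unit-weight ring (cyclic tridiagonal).
L = 2 * np.eye(N) - np.roll(np.eye(N), 1, axis=0) - np.roll(np.eye(N), -1, axis=0)
sigma, zeta1, zeta2 = 1.0, 0.2, 0.1

y = np.array([1.5, 2.0, 3.0, 3.5])   # feasible start: sum(y) = 10 = assumed demand
lam = np.zeros(N)
lam_hat = lam.copy()                 # last-broadcast lambda values

for k in range(int(T / dt)):
    t = k * dt
    # Rule (4) per agent (scalar states): rebroadcast where the squared
    # measurement error exceeds the decaying threshold.
    fire = (lam_hat - lam) ** 2 >= zeta1 * np.exp(-zeta2 * t)
    lam_hat[fire] = lam[fire]
    x = np.clip(y, 0.0, 5.0)         # output equation (3): box projection
    gamma = x - c                    # gradient of the quadratic cost
    y = y + dt * (L @ lam_hat)       # state equation, stacked form (5)
    lam = lam + dt * (-sigma * lam - gamma)

x = np.clip(y, 0.0, 5.0)
print(np.round(x, 2))                # close to the optimal allocation c
print(round(float(y.sum()), 6))      # sum(y) conserved since 1^T L = 0
```

With these assumptions the optimal allocation is simply \(x^*=c\) (the total demand equals \(\sum _i c_i\)), and the simulated outputs approach it while \(\sum _i y_i\) stays constant up to floating-point error.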

Theoretical analysis

In this section, to ensure that the outputs of RNNs reach the optimal solution \(x^*\) to problem (1), there are two issues to be tackled: (1) the relationship between the outputs of RNNs and the optimal solution \(x^*\); (2) the convergence of the outputs of RNNs to the optimal solution \(x^*\). In what follows, the two issues will be discussed in detail.

Optimality analysis

Under Assumption 1, problem (1) is a constrained convex optimization problem. The optimality condition is given in Lemma 1.

Lemma 1

Under Assumptions 1–2, \(x^*\) is an optimal solution to problem (1), if and only if

$$\begin{aligned} a_i^{-1}\gamma _i^*=a_j^{-1}\gamma _j^*, \end{aligned}$$
(7)

where \(\gamma _i^*\in \partial f_i(x_i^*)\), \(\forall i\in \mathcal {I}\).

The conclusion follows directly from the Karush-Kuhn-Tucker (KKT) conditions [39]. In the power market, (7) characterizes the electricity price at the optimal power allocation \(x^*\). The relationship between the outputs of RNNs and the optimal solution \(x^*\) is analyzed in Lemma 2.
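Condition (7) says all agents share a common "price" \(\mu \) with \(\gamma _i^*=a_i\mu \). For illustrative quadratic costs \(f_i(x)=(x-c_i)^2/2\) and box sets (assumptions for this sketch, not from the paper), this gives \(x_i(\mu )=P_{[lo,hi]}(c_i+a_i\mu )\), and \(\mu \) can be found by bisection on the monotone resource balance \(\sum _i a_ix_i(\mu )=d\):

```python
import numpy as np

def allocate(c, a, lo, hi, d, iters=100):
    """Centralized KKT solver for min sum (x_i - c_i)^2 / 2 subject to
    sum a_i x_i = d and box sets [lo, hi]; hypothetical costs for illustration."""
    x_of = lambda mu: np.clip(c + a * mu, lo, hi)   # stationarity + projection
    mu_lo, mu_hi = -1e6, 1e6
    for _ in range(iters):                           # bisection on the price mu
        mu = 0.5 * (mu_lo + mu_hi)
        if (a * x_of(mu)).sum() < d:
            mu_lo = mu
        else:
            mu_hi = mu
    return x_of(mu), mu

x_star, mu = allocate(np.array([1.0, 2.0, 3.0, 4.0]), np.ones(4), 0.0, 5.0, 12.0)
print(np.round(x_star, 3), round(mu, 3))   # [1.5 2.5 3.5 4.5], price 0.5
```

This centralized computation is useful as a reference: the distributed neurodynamic approach should drive the RNNs' outputs to the same allocation without any agent knowing all \(c_i\), \(a_i\).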

Lemma 2

Under Assumptions 1–3, \(x^*\in \Omega \) is an optimal solution to problem (1) if and only if there exists an equilibrium point \((y^*,\lambda ^*)\) of system (5) with output \(x^*=P_{\Omega }(y^*)\), where \(y^*\in \mathbb {R}^{Nn}\) and \(\lambda ^*=1_N\otimes \bar{\lambda }\) for some \(\bar{\lambda }\in \mathbb {R}^n\).

Proof

  (1)

    Necessity: Assume that \((y^*,\lambda ^*)\) is an equilibrium point of system (5), satisfying

    $$\begin{aligned} \begin{aligned} 0_{Nn}&=(L\otimes I_n)\lambda ^*,\\ 0_{Nn}&=-\sigma \lambda ^*-a^{-1}\gamma ^*. \end{aligned} \end{aligned}$$
    (8)

    Under Assumption 3, it follows from the first equation in (8) that \(\lambda _i^*=\lambda _j^*=\bar{\lambda }\) for some \(\bar{\lambda }\in \mathbb {R}^n\). Under Assumptions 1 and 2, it further follows from the second equation of (8) that \(a_i^{-1}\gamma _i^*=a_j^{-1}\gamma _j^*\) with \(\gamma _i^*\in \partial f_i(x_i^*)\) and \(\gamma _j^*\in \partial f_j(x_j^*)\), which satisfies the optimality condition (7) in Lemma 1. Therefore, \(x^*=P_{\Omega }(y^*)\) is an optimal solution to problem (1).

  (2)

    Sufficiency: By Assumption 2, let \(x^*\in \Omega \) be an optimal solution to problem (1). Under Assumption 1, define \(\lambda _i^*=-\sigma ^{-1}a_i^{-1}\gamma _i^*\) with \(\gamma _i^*\in \partial f_i(x_i^*)\), so the second equation in (8) holds. According to Lemma 1, \(\lambda _i^*=\lambda _j^*\) for all \(i,j\in \{1,\ldots ,N\}\); hence \((L\otimes I_n)\lambda ^*=0_{Nn}\) by Assumption 3, which is the first equation in (8). Choosing \(y^*\) such that \(x^*=P_{\Omega }(y^*)\) (for example, \(y^*=x^*\)), \((y^*,\lambda ^*)\) is an equilibrium point of system (5).

\(\square \)

It is known from Lemma 2 that the output x(t) of the RNNs reaches \(x^*\) if the state y(t) of (5)–(6) converges to the equilibrium point \(y^*\).

Convergence analysis

At first, the convergence of state \((y,\lambda )\) and output x of the neural network is analyzed. Then, the analysis of the Zeno behavior in event-triggering rule (4) is given.

Theorem 3

Suppose that Assumptions 1–3 hold. If the initial condition satisfies \((y(0),\lambda (0))\in \{(y,\lambda )\in \mathbb {R}^{2Nn} \mid \sum _{i=1}^N a_iy_i=\sum _{i=1}^Nb_i\}\), the state \((y(t),\lambda (t))\) of system (5) converges to \((y^*,\lambda ^*)\) and the output x(t) reaches \(x^*\).

Proof

Define the measurement error as \(e_i=\hat{\lambda }_i-\lambda _i\) for \(i\in \{1,\ldots ,N\}\), and denote \(e=\)col\((e_1,\ldots , e_N)\). Thus, the model of the collective neurodynamic network is rewritten as

  (1)

    State equation

    $$\begin{aligned} \begin{aligned} \frac{\textrm{d}y}{\textrm{d}t}&=(L\otimes I_n)\lambda +(L\otimes I_n)e,\\ \frac{\textrm{d}\lambda }{\textrm{d}t}&\in -\sigma \lambda -a^{-1}\partial f(x), \end{aligned} \end{aligned}$$
    (9)
  (2)

    Output equation

    $$\begin{aligned} x=P_{\Omega }(y). \end{aligned}$$
    (10)


Regard (9)–(10) as a dynamic system, where y and \(\lambda \) are the states and x is the output. Then, the convergence of the collective neurodynamic network’s output can be analyzed via the stability of this dynamic system by the theory of nonlinear dynamical systems. Consider the Lyapunov function \(V=\frac{1}{2}\lambda ^T(L^TL\otimes I_n)\lambda +E(y)\), where \(E(y):\mathbb {R}^{Nn}\rightarrow \mathbb {R}\) is defined by \(\partial E(y)=a^{-1}(L\otimes I_n)\partial f(P_{\Omega }(y))\) with \(\partial f(P_{\Omega }(y))=\)col\((\partial f_1(P_{\Omega _1}(y_1)),\ldots ,\partial f_N(P_{\Omega _N}(y_N)))\). Since matrix \(L^TL\) is symmetric and positive semidefinite, it yields that

$$\begin{aligned} \begin{aligned} \dot{V}&\le \sup _{\gamma \in \partial f(x)} (-\sigma \lambda ^T(L^TL\otimes I_n)\lambda )\\&\quad +((L\otimes I_n)\gamma )^T(L\otimes I_n)e. \end{aligned} \end{aligned}$$
(11)

Since \(\gamma \) is bounded, it follows from event-triggering rule (4) that

$$\begin{aligned} \dot{V}\le -\sigma \lambda ^T(L^TL\otimes I_n)\lambda + \Vert \gamma ^T(L^TL\otimes I_n)\Vert \zeta _1 e^{-\zeta _2t},\nonumber \\ \end{aligned}$$
(12)

where \(\zeta _1=\max \{\zeta _{11},\ldots ,\zeta _{N1}\}\) and \(\zeta _2=\min \{\zeta _{12},\ldots ,\zeta _{N2}\}\). Integrating the exponentially decaying term in (12) shows that V(t) is bounded from above by a constant \(\bar{V}\ge V(0)\). Thus, \(\frac{1}{2}\Vert (L\otimes I_n)\lambda (t)\Vert ^2\le \bar{V}-E(y)\), which indicates that \(\Vert (L\otimes I_n)\lambda \Vert \) is bounded. Since \(\gamma \in \partial f(x)\) is bounded, it is derived from the second equation in (9) that

$$\begin{aligned} \begin{aligned} \Vert \lambda (t)\Vert&=\Big \Vert e^{-\sigma t}\lambda (0)-\int _{0}^t e^{-\sigma (t-\tau )}a^{-1}\gamma \,\textrm{d}\tau \Big \Vert \\&\le e^{-\sigma t}\Vert \lambda (0)\Vert +\int _{0}^t e^{-\sigma (t-\tau )}\Vert a^{-1}\gamma \Vert \,\textrm{d}\tau \\&\le e^{-\sigma t}\Vert \lambda (0)\Vert +\frac{\sup _{\tau \le t}\Vert a^{-1}\gamma (\tau )\Vert }{\sigma }(1-e^{-\sigma t}). \end{aligned} \end{aligned}$$
(13)

Since \(\lambda (t)\) is bounded, it follows from the second equation in (9) that \(\dot{\lambda }\) is bounded, which further yields that \((L\otimes I_n)\dot{\lambda }\) is bounded. By Barbalat’s lemma in [40], \(\lim _{t\rightarrow \infty } (L\otimes I_n)\lambda (t)=0_{Nn}\), which indicates that \(\lim _{t\rightarrow \infty }(\lambda _i(t)-\lambda _j(t))=0_n\), i.e., the states \(\lambda _i\) reach consensus on some \(\bar{\lambda }\in \mathbb {R}^n\). From \(\lim _{t\rightarrow \infty } (L\otimes I_n)\lambda (t)=0_{Nn}\), we know that \(\dot{y}\) converges to zero, where \(y(t)=y(0)+\int _{0}^t (L\otimes I_n)\hat{\lambda }(\tau )\textrm{d}\tau \). Thus, \(\lim _{t\rightarrow \infty } y(t)=y^*\), and \((y,\lambda )\) asymptotically converges to \((y^*,\lambda ^*)\). Since y(t) converges to the equilibrium point \(y^*\), \(a_i^{-1}\gamma _i^*=a_j^{-1}\gamma _j^*\) holds. According to Lemma 2, x(t) asymptotically converges to \(x^*=P_{\Omega }(y^*)\), which is the optimal solution to problem (1). \(\square \)

Theorem 4

For the proposed neurodynamic model (2)–(3), event-triggered rule (4) is free of the Zeno behavior.

Proof

Similar to [41], the Zeno behavior is excluded by showing that the inter-event intervals are positive. For the ith RNN, the dynamics of the measurement error \(e_i\) between events is \(\dot{e}_i=-\dot{\lambda }_i\). It yields that

$$\begin{aligned} \frac{d\Vert e_i\Vert }{dt} \le \Vert \frac{de_i}{dt}\Vert \le \sigma \Vert \lambda _i\Vert +\Vert a_i^{-1}\gamma _i\Vert , \end{aligned}$$
(14)

where \(\gamma _i\in \partial f_i(x_i)\). According to the proof of Theorem 3, the trajectories \(\lambda _i(t)\), \(i\in \mathcal {I}\), of system (5) are bounded. Then, there is a constant \(Q>0\) such that \(\frac{\textrm{d}\Vert e_i\Vert }{\textrm{d}t} \le Q\). Let \(t_{k_i}^i\) be the latest triggering time, at which the error is reset, i.e., \(e_i(t_{k_i}^i)=0\). Thus, \(\Vert e_i(t)\Vert \le Q(t-t_{k_i}^i)\) for \(t\ge t_{k_i}^i\). The next event is triggered when \(\Vert e_i\Vert ^2=\zeta _{i1}e^{-\zeta _{i2}t}\). Thus, the inter-event interval \(T_k^i=t_{k_i+1}^i-t_{k_i}^i\) between two consecutive events satisfies \(\sqrt{\zeta _{i1}}e^{-\zeta _{i2}t_{k_i+1}^i/2}\le QT_k^i\). Since \(\zeta _{i1}e^{-\zeta _{i2}t}>0\) for any finite \(t\ge 0\), it yields that \(T_{k}^i>0\) for any given k and any \(i\in \{1,\ldots ,N\}\). Therefore, the Zeno behavior is excluded in event-triggered rule (4). \(\square \)
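The positivity of inter-event intervals can also be observed numerically. In this sketch the trajectory \(\lambda _i(t)=e^{-t}\), the horizon, and the parameter values are assumptions chosen purely for illustration; rule (4) then fires only a few times and the recorded gaps stay bounded away from zero.

```python
import numpy as np

# Illustrative scalar trajectory lambda_i(t) = exp(-t) monitored by rule (4).
dt, T, zeta1, zeta2 = 1e-3, 20.0, 0.5, 0.1
lam_hat, events = 1.0, [0.0]          # initial broadcast at t = 0
for k in range(1, int(T / dt)):
    t = k * dt
    lam = np.exp(-t)
    # Fire when the squared measurement error crosses the decaying threshold.
    if (lam_hat - lam) ** 2 >= zeta1 * np.exp(-zeta2 * t):
        lam_hat = lam
        events.append(t)

gaps = np.diff(events)
print(len(events))                    # only a handful of broadcasts
print(float(gaps.min()) if len(gaps) else None)   # strictly positive gaps
```

Since the threshold never vanishes at finite time, every recorded gap is positive, matching the Zeno-exclusion argument above.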

Discussions

In this section, the influence of parameters on the convergence performance is discussed in Remark 3. Comparisons with related works are discussed in Remarks 4–7. The computational complexity of the designed neurodynamic approach is analyzed in Remark 8.

Remark 3

According to the proof of Theorem 3, parameter \(\sigma \) has an impact on the consensus convergence rate and the steady-state error. A larger \(\sigma \) can increase the convergence rate and decrease the steady-state error. In event-triggered rule (4), parameters \(\zeta _{i1}\) and \(\zeta _{i2}\) affect the communication frequency. For a fixed \(\zeta _{i2}\), a small \(\zeta _{i1}\) shortens the inter-event intervals [46].

Remark 4

Although both this paper and [5] study the distributed resource allocation problem, there are three main differences between model (2)–(3) and the method presented in [5]. First, a more general problem (1) with local convex sets is studied in this paper; the problem studied in [5] is a special case of problem (1) in which \(a_i=1, \forall i\in \mathcal {I}\). Second, the methods for handling local convex sets differ: different from the exact penalty method used in [5], an internal dynamics with projection output is proposed in this paper. Third, the communication schemes differ: an aperiodic event-triggered scheme is designed in RNNs’ model (2)–(3) to reduce the communication loads incurred by the continuous-time communication required in [5].

Remark 5

Compared with the neurodynamic approaches designed in [31, 37, 55, 56] to solve distributed resource allocation, the model of RNNs presented in this paper has a simpler structure with fewer internal variables. Moreover, the assumption used in [31, 37, 55, 56] that a feasible solution exists in the interior of \(\Omega \) is relaxed. In contrast to the continuous-time communication required for implementing the neurodynamic approaches in [31, 37, 55], an asynchronous discrete-time communication scheme based on the event-triggered mechanism is designed in the neurodynamic approach (2)–(3) to reduce communication loads. Different from the event-triggering rule in [56], which depends on neighboring RNNs’ triggered states, the event-triggered scheme (4) is determined only by each RNN’s own state, so there is no need to monitor neighbors’ triggering information in real time.

Remark 6

Similar event-triggered communication schemes are designed in [34] and in this paper. In contrast to the synchronization of recurrent neural networks studied in [34], this paper considers a class of distributed constrained optimization problems with non-smooth objective functions subject to global constraints and local convex sets. An explicit event-triggering function is given in (4). Different selections of parameters \(\zeta _{i1}\) and \(\zeta _{i2}\) affect the inter-event intervals and thus adjust the communication frequency.

Remark 7

Although both this paper and the related works [3, 7,8,9,10, 28, 37] study the distributed resource allocation problem, the significant advantage of our proposed neurodynamic approach is its simple structure with fewer internal states and less interactive information. Moreover, the proposed neurodynamic approach is fully distributed, that is, the convergence of the RNNs’ outputs does not rely on global information, such as knowledge of the communication network.

Remark 8

The solution of the non-smooth resource allocation problem (1) exists. Thus, from the perspective of computational complexity, the designed neurodynamic approach (2)–(3) has polynomial-order complexity. From the viewpoint of control systems, the designed neurodynamic approach (2)–(3) has the ability to tolerate parameter uncertainty. The robustness of the designed neurodynamic approach (2)–(3) to model parameter uncertainty will be analyzed in our subsequent studies.

Numerical simulations

In this section, two examples are given to verify the proposed neurodynamic approach.

Fig. 2

a Simplified illustration of the smart grid. b Communication topology

Example 1

The distributed economic dispatch in smart grids is considered here. Figure 2a shows a smart grid composed of four buses. The four buses exchange information on a graph \(\mathcal {G}\) given in Fig. 2b. Each bus has a cost function [7, 42], given by

$$\begin{aligned} f_i(x_i)=\alpha _i+\beta _i\vert x_i-c_i\vert +\kappa _ix_i^2, \ \ i\in \{1,2,3,4\}. \end{aligned}$$
(15)

The economic dispatch problem is

$$\begin{aligned} \begin{aligned}&\min \sum _{i=1}^4 f_i(x_i)\\&\mathrm{s.t.} \ \sum _{i=1}^4x_i=140, \ x_i\in \Omega _i, \end{aligned} \end{aligned}$$

where \(\Omega _i\) denotes the local generation capacity of the i-th generator. Parameters \(\alpha _i,\beta _i,c_i,\kappa _i\) and \(\Omega _i\), \(i\in \{1,2,3,4\}\), are given in Table 1. According to Lemma 1, the optimal solution exists and is \(x^*=[22.1,31.7,68.4,17.8]^T\). It follows from Theorem 3 that the outputs of the RNNs can reach the optimal solution \(x^*\).

Table 1 System parameters
Fig. 3

The outputs of the four RNNs

Fig. 4

The state \(\lambda (t)\) of the four RNNs

Fig. 5

The event-triggered time sequences of the four RNNs

Fig. 6

The numbers of communications under the periodic and event-triggered communication schemes

Fig. 7

The outputs of the four RNNs under continuous-time and event-triggered communications

The initial conditions are \(x(0)=y(0)=[30, 35, 55, 20]^T\) and \(\lambda (0)=[0,0,0,0]^T\). In the dynamics of RNNs (2), select \(\sigma =1\). For the event-triggered communication scheme (4), choose \([\zeta _{11},\zeta _{21},\zeta _{31},\zeta _{41}]^T=[0.8,0.5,0.5,0.8]^T\) and \([\zeta _{12},\zeta _{22},\zeta _{32},\zeta _{42}]^T=[0.1,0.15,0.15,0.25]^T\). The four RNNs’ outputs obtained by (2) are shown in Fig. 3, where the solid lines are the four RNNs’ outputs and the dotted lines are the optimal decisions. It indicates that the four RNNs’ outputs reach the optimal solution \(x^*\). In Fig. 4, the states \(\lambda _i(t)\) of the RNNs, \(i\in \{1,2,3,4\}\), updated by (2), reach consensus, which implies that (7) holds at the steady state. The event-triggered time sequences of the four RNNs are given in Fig. 5, which shows that the inter-event intervals are positive. It further indicates the absence of Zeno behavior in the designed event-triggering rule (4), which validates the results given in Theorem 4. Compared with the periodic communication with \(T=0.1\,s\), the numbers of communications under the periodic and event-triggered schemes are shown in Fig. 6, which shows that event-triggered scheme (4) effectively reduces the number of communications so that the communication load is alleviated. To show that the event-triggered communication has little influence on the convergence performance, Fig. 7 depicts the outputs of the four RNNs under continuous-time and event-triggered communication, respectively. It is seen from Fig. 7 that the convergence performance of neurodynamic model (2)–(3) with event-triggered communication scheme (4) is similar to that of the model with continuous-time communication.

Example 2

As an illustration, the economic dispatch of the IEEE 30-bus system [3] is considered. The cost function of each power generator is of the same form as (15). Table 2 shows the parameters in the cost functions. Figure 8 depicts the communication graph of the IEEE 30-bus system. The balance between supply and demand is described by \(\sum _{i=1}^6x_i=290\). The initial conditions are \(x(0)=y(0)=[126,78,48,12,12,14]^T\) and \(\lambda (0)=[0,0,0,0,0,0]^T\). According to Lemma 1, the optimal solution of problem (1) exists and is \(x^*=[174.9,44.67,18.52,10.49,10.49,31.35]^T\). Set \(\sigma =0.5\), \(\zeta _{i1}=0.1\), and \(\zeta _{i2}=0.01\) for all \(i\in \{1,\ldots ,6\}\) in (2) and (4), respectively. The outputs of the six RNNs, regulated by (2), are shown in Fig. 9a, which indicates that the proposed neurodynamic approach finds the optimal solution \(x^*\). The results given in Theorem 3 are validated by Fig. 9. The effect of different \(\sigma \) on the convergence rate of the six RNNs’ outputs is also shown in Fig. 9: the six RNNs’ outputs reach \(x^*\) before \(t=100\,s\) when \(\sigma =0.3\), while for \(\sigma =0.1\) they reach \(x^*\) after \(t=100\,s\). This indicates that the convergence rate decreases as parameter \(\sigma \) becomes smaller. The analysis of the inter-event intervals of the six RNNs is shown in Table 3, which shows that the event-triggered scheme is an aperiodic communication method, the event counts indicate the reduction of communication frequency, and the positive minimal intervals imply the absence of Zeno behavior.

Fig. 8

The communication graph for the IEEE 30-bus system

Table 2 Parameters of generations in IEEE 30-bus system
Fig. 9

The outputs of the six RNNs with different values of \(\sigma \)

Table 3 Analysis of event-triggered intervals for the six generators

Conclusion

This paper presented a novel collective neurodynamic approach to find an optimal solution to the distributed resource allocation problem with non-smooth local objective functions and local convex sets. An aperiodic communication scheme based on an event-triggering rule was designed for the network of RNNs to alleviate communication loads. It was proved that the RNNs’ outputs reach the optimal resource allocation without any global information. Compared with most existing continuous-time collective neurodynamic approaches, our proposed model has a simple structure and a resource-efficient communication scheme. In addition, the absence of Zeno behavior was guaranteed in the presented event-triggered communication scheme. Taking time-varying objective functions into consideration, future work may study collective neurodynamic approaches for distributed time-varying resource allocation.