CC BY 4.0 license | Open Access | Published by De Gruyter, November 13, 2020

Stability analysis of almost periodic solutions for discontinuous bidirectional associative memory (BAM) neural networks with discrete and distributed delays

  • Weijun Xie, Fanchao Kong (corresponding author), Hongjun Qiu and Xiangying Fu

Abstract

This paper discusses a class of discontinuous bidirectional associative memory (BAM) neural networks with discrete and distributed delays. By using set-valued maps, differential inclusion theory and the fundamental solution matrix, the existence of almost periodic solutions for the addressed neural network model is first discussed under some new conditions. Subsequently, based on non-smooth analysis theory with a Lyapunov-like strategy, the global exponential stability of the almost periodic solution for the proposed neural network system is also established without any additional conditions. The results achieved in this paper extend some previous works on BAM neural networks to the discontinuous case, and it is worth mentioning that this is the first time the almost periodic dynamic behavior of BAM neural networks of this form has been investigated. Finally, in order to demonstrate the effectiveness of the theoretical results, simulation results for two typical numerical examples are presented.

1 Introduction

1.1 Previous works

Bidirectional associative memory (BAM) neural networks, introduced by Kosko in [1], [2], have been widely investigated due to their extensive applications in signal processing, image processing, pattern recognition, and so on. During the past several decades, many studies have investigated the dynamical behaviors of delayed BAM neural networks; see [3], [4], [5], [6], [7], [8], [9], [10], [11], [12], [13], [14], [15], [16] and the related references therein. For example, on the basis of the M-matrix approach, the homeomorphism property and a novel Lyapunov functional, Guo et al. [6] considered the existence, uniqueness, and exponential stability of complex-valued memristor-based BAM neural networks with time delays; by using Banach's fixed point theorem and differential inequality techniques, Aouiti et al. [4] established existence and global exponential stability results for pseudo almost periodic solutions of neutral delay BAM neural networks with time-varying delay in leakage terms.

From the above references, we can see that the activation functions are assumed to be continuous, Lipschitz continuous or even smooth. However, discontinuous activations can arise owing to influences from the external environment. In recent years, neural networks with discontinuous right-hand sides have been widely investigated owing to their practical applications in mechanics, automatic control, and various other science and engineering fields. Because of the discontinuous right-hand sides, instability of neural networks often occurs. Naturally, stability analysis of discontinuous neural networks has become interesting and necessary. Based on the pioneering work of Forti and Nistri [17] in 2003, considerable attention has been devoted to the dynamic behaviors of neural network systems with discontinuous activation functions; see, to name a few, [18], [19], [20], [21], [22], [23], [24], [25], [26].

1.2 Motivations of this paper

However, compared with continuous BAM neural networks and general discontinuous systems, little attention has been devoted to the dynamic behavior of BAM neural networks with discontinuous activations so far; see [27], [28]. In addition, although almost periodic solutions for BAM neural networks with continuous activations have been considered [29], to the best of our knowledge the almost periodic dynamical behaviors of BAM neural networks with discontinuous activations, discrete and distributed delays have not been touched yet.

In order to solve the deficiencies mentioned above partially, in this paper, we consider a class of discontinuous BAM neural networks with discrete and distributed delays.

1.3 Highlights of this paper

The main contributions of this paper are summarized as follows.

  1. Most of the existing results on the stability analysis of BAM neural networks ignore the influence of discontinuous activations and of discrete and distributed time delays on the system. Moreover, many previous results on BAM neural networks assume that the activation functions are continuous, Lipschitz continuous or even smooth. In this paper, we focus on discontinuous BAM neural networks with discrete and distributed delays. Hence, many previous neural network models, such as those in [4], [5], [9], [27], [28], [29], are included as special cases of our system.

  2. We study the almost periodic dynamical behaviors of discontinuous BAM neural networks with discrete and distributed delays for the first time. By using set-valued maps and differential inclusion theory, we prove the existence of an almost periodic solution under certain new conditions.

  3. Based on the definition of global exponential stability, the global exponential stability of the almost periodic solution of the proposed discontinuous BAM neural network is established without any additional conditions.

  4. Two simulation examples are shown to illustrate the correctness of the established results.

The remainder part of this paper is organized as follows. In Section 2, some basic definitions and preliminary lemmas are introduced. In Section 3, some new criteria are given to establish the existence result of almost periodic solutions. In Section 4, global exponential stability analysis of the almost periodic solution is provided. In Section 5, we provide two numerical examples to demonstrate the theoretical results. Finally, some conclusions are stated in Section 6.

2 System description and preliminaries

2.1 System description

Consider the following discontinuous BAM neural networks with discrete and distributed delays:

(2.1)
$$\begin{cases}\dfrac{du_i(t)}{dt}=-a_iu_i(t)+\displaystyle\sum_{j=1}^{m}c_{ij}(t)f_j(v_j(t))+\sum_{j=1}^{m}d_{ij}(t)f_j(v_j(t-\tau_{ij}(t)))+\sum_{j=1}^{m}e_{ij}(t)\int_{t-\delta_{ij}(t)}^{t}f_j(v_j(s))\,ds+I_i(t),\\[2mm]\dfrac{dv_j(t)}{dt}=-b_jv_j(t)+\displaystyle\sum_{i=1}^{n}p_{ji}(t)g_i(u_i(t))+\sum_{i=1}^{n}q_{ji}(t)g_i(u_i(t-\sigma_{ji}(t)))+\sum_{i=1}^{n}h_{ji}(t)\int_{t-\kappa_{ji}(t)}^{t}g_i(u_i(s))\,ds+J_j(t),\end{cases}$$

where i = 1, 2, …, n, j = 1, 2, …, m; $u_i(t)$, $v_j(t)$ denote the potentials of cells i and j at time t; $a_i$, $b_j$ are positive constants denoting the rates at which cells i and j reset their potential to the resting state when isolated from the other cells and inputs; $c_{ij}(t)$, $d_{ij}(t)$, $e_{ij}(t)$, $p_{ji}(t)$, $q_{ji}(t)$, $h_{ji}(t)$ denote the first- and second-order connection weights of the neural networks; $f_j(\cdot)$ and $g_i(\cdot)$ denote the activation functions of the jth and ith neurons, respectively; $I_i(t)$, $J_j(t)$ are the ith and jth components of the external inputs introduced from outside the networks to cells i and j; $\tau_{ij}(t)$, $\sigma_{ji}(t)$ and $\delta_{ij}(t)$, $\kappa_{ji}(t)$ denote the discrete time-varying delays and the distributed time-varying delays, respectively, and $\tau_{ij}(t)$, $\sigma_{ji}(t)$ satisfy

$$0<\tau_{ij}(t)\le\tau,\qquad 0<\sigma_{ji}(t)\le\sigma,$$

where τ and σ are positive constants.

Remark 1.

According to ordinary differential equation theory, a periodic solution should be defined up to $+\infty$. However, over long time scales the periodic parameters of a dynamical system are often influenced by uncertain perturbations; that is, the parameters can only be regarded as periodic up to a small error. In this situation, almost periodic behavior is more realistic than periodic behavior for dynamical systems. Moreover, it must be pointed out that we will prove in this paper that the maximal interval of existence of the almost periodic solutions of the proposed discontinuous BAM neural network system (2.1) can be extended to $+\infty$.

Throughout the paper, we assume that the following conditions hold.

(H1).

$f_j$ and $g_i$, i = 1, …, n, j = 1, …, m, are piecewise continuous, i.e., $f_j$ and $g_i$ are continuous in $\mathbb{R}$ except on a countable set of jump discontinuity points, and have only a finite number of jump discontinuity points in every compact subset of $\mathbb{R}$.

(H2).

for all $(u(t),v(t))=(u_1(t),\dots,u_n(t),v_1(t),\dots,v_m(t))$, $u_i$ (i = 1, …, n), $v_j$ (j = 1, …, m), and $t\in(0,+\infty)$, the set-valued maps $v_j\mapsto\overline{co}[f_j(v_j)]$ and $u_i\mapsto\overline{co}[g_i(u_i)]$ (with selections $\gamma_j\in\overline{co}[f_j(v_j)]$, $\eta_i\in\overline{co}[g_i(u_i)]$) are bounded and monotonically non-decreasing, where

$$\overline{co}[f_j(v_j)]=\big[\min\{f_j(v_j^-),f_j(v_j^+)\},\ \max\{f_j(v_j^-),f_j(v_j^+)\}\big],\qquad \overline{co}[g_i(u_i)]=\big[\min\{g_i(u_i^-),g_i(u_i^+)\},\ \max\{g_i(u_i^-),g_i(u_i^+)\}\big].$$

(H2*).

for all $(u(t),v(t))=(u_1(t),\dots,u_n(t),v_1(t),\dots,v_m(t))$, $u_i$ (i = 1, …, n), $v_j$ (j = 1, …, m), and $t\in(0,+\infty)$, the set-valued maps $v_j\mapsto\overline{co}[f_j(v_j)]$ and $u_i\mapsto\overline{co}[g_i(u_i)]$ (with selections $\gamma_j\in\overline{co}[f_j(v_j)]$, $\eta_i\in\overline{co}[g_i(u_i)]$) are bounded and monotonically non-increasing, where

$$\overline{co}[f_j(v_j)]=\big[\min\{f_j(v_j^-),f_j(v_j^+)\},\ \max\{f_j(v_j^-),f_j(v_j^+)\}\big],\qquad \overline{co}[g_i(u_i)]=\big[\min\{g_i(u_i^-),g_i(u_i^+)\},\ \max\{g_i(u_i^-),g_i(u_i^+)\}\big].$$

2.2 Preliminaries

Let f(t) be a bounded continuous function defined on $\mathbb{R}$. We define

$$f^{M}=\sup_{t\in\mathbb{R}}|f(t)|,\qquad f^{L}=\inf_{t\in\mathbb{R}}|f(t)|.$$

For any vector $v=(v_1,v_2,\dots,v_n)$ and matrix $D=(d_{ij})_{n\times n}$, we define the norms

$$\|v\|=\sum_{i=1}^{n}|v_i|,\qquad \|D\|=\sum_{i,j=1}^{n}|d_{ij}|.$$

Let $\varphi(s)=(\varphi_1(s),\varphi_2(s),\dots,\varphi_n(s))$ and $\psi(s)=(\psi_1(s),\psi_2(s),\dots,\psi_m(s))$, where $\varphi_i(s)\in C([-\tau,0],\mathbb{R})$, i = 1, 2, …, n, and $\psi_j(s)\in C([-\sigma,0],\mathbb{R})$, j = 1, 2, …, m. Define

$$\|\varphi\|=\sup_{-\tau\le s\le 0}\sum_{i=1}^{n}|\varphi_i(s)|,\qquad \|\psi\|=\sup_{-\sigma\le s\le 0}\sum_{j=1}^{m}|\psi_j(s)|.$$

Consider the dynamic system defined by the following differential equation

(2.2) $\dot{x}(t)=f(t,x(t)),\quad x(0)=x_0,\quad t\ge 0,$

where x(t) represents the state variable. If $f(t,x(t))$ is continuous with respect to x(t), then according to Peano's theorem the existence of a continuously differentiable solution is guaranteed. If $f(t,x(t))$ is locally measurable but discontinuous with respect to x(t), then the solution of the Cauchy problem (2.2) is understood in the Filippov sense [30].

Definition 2.1.

Suppose $f(t,x(t)):\mathbb{R}_{+}\times\mathbb{R}^{n}\to\mathbb{R}^{n}$ is Lebesgue measurable and locally bounded uniformly in time. A vector function x(t) is called a Filippov solution of system (2.2) if x(t) is absolutely continuous and satisfies the differential inclusion

$$\dot{x}(t)\in K[f(t,x(t))],\quad\text{a.e. } t\in[0,t_1],$$

where $t_1\in\mathbb{R}_{+}$ or $t_1=+\infty$, and the set-valued map $K[f(t,x(t))]$ is defined by

$$K[f(t,x(t))]\triangleq\bigcap_{\delta>0}\bigcap_{\mu(N)=0}\overline{co}\big(f(t,B(x,\delta)\setminus N)\big),$$

where $\overline{co}(S)$ denotes the convex closure of the set S, $B(x,\delta)$ is the open ball with center x and radius $\delta$, and $\mu(N)$ denotes the Lebesgue measure of the set N.
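For orientation, here is a standard illustration (not taken from this paper, but a classical fact): for the scalar function $f(x)=\operatorname{sign}(x)$ the Filippov regularization fills in the jump at the origin,

$$K[\operatorname{sign}](x)=\begin{cases}\{1\}, & x>0,\\ [-1,1], & x=0,\\ \{-1\}, & x<0,\end{cases}$$

which is exactly the role played by the intervals $\overline{co}[f_j(v_j)]$ and $\overline{co}[g_i(u_i)]$ in (H2) at each jump point of the activations.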

By Definition 2.1, $u_i(t)$ (i = 1, 2, …, n) and $v_j(t)$ (j = 1, 2, …, m) are solutions of the initial value problem (2.1) on [0, b), $b\in(0,+\infty]$, if $u_i(t)$ and $v_j(t)$ are absolutely continuous on any compact subinterval of [0, b) and satisfy the differential inclusion

$$\begin{cases}\dfrac{du_i(t)}{dt}\in -a_iu_i(t)+\displaystyle\sum_{j=1}^{m}c_{ij}(t)\overline{co}[f_j(v_j(t))]+\sum_{j=1}^{m}d_{ij}(t)\overline{co}[f_j(v_j(t-\tau_{ij}(t)))]+\sum_{j=1}^{m}e_{ij}(t)\int_{t-\delta_{ij}(t)}^{t}\overline{co}[f_j(v_j(s))]\,ds+I_i(t),\\[2mm]\dfrac{dv_j(t)}{dt}\in -b_jv_j(t)+\displaystyle\sum_{i=1}^{n}p_{ji}(t)\overline{co}[g_i(u_i(t))]+\sum_{i=1}^{n}q_{ji}(t)\overline{co}[g_i(u_i(t-\sigma_{ji}(t)))]+\sum_{i=1}^{n}h_{ji}(t)\int_{t-\kappa_{ji}(t)}^{t}\overline{co}[g_i(u_i(s))]\,ds+J_j(t),\end{cases}$$

for a.e. $t\in[0,b)$, i = 1, 2, …, n; j = 1, 2, …, m. For i = 1, 2, …, n, j = 1, 2, …, m, the set-valued maps on the right-hand side have nonempty compact convex values; thus they are upper semi-continuous and measurable. According to the measurable selection theorem, if $u_i(t)$ and $v_j(t)$ are a solution of system (2.1), then there exists a measurable function $\kappa(t)=(\eta(t),\gamma(t))=(\eta_1(t),\dots,\eta_n(t),\gamma_1(t),\dots,\gamma_m(t)):[-\varsigma,b)\to\mathbb{R}^{n+m}$, $\varsigma=\max\{\tau,\sigma\}$, such that $\eta_i(t)\in\overline{co}[g_i(u_i(t))]$, $\gamma_j(t)\in\overline{co}[f_j(v_j(t))]$ for a.e. $t\in[-\varsigma,b)$ and

(2.3)
$$\begin{cases}\dfrac{du_i(t)}{dt}=-a_iu_i(t)+\displaystyle\sum_{j=1}^{m}c_{ij}(t)\gamma_j(t)+\sum_{j=1}^{m}d_{ij}(t)\gamma_j(t-\tau_{ij}(t))+\sum_{j=1}^{m}e_{ij}(t)\int_{t-\delta_{ij}(t)}^{t}\gamma_j(s)\,ds+I_i(t),\\[2mm]\dfrac{dv_j(t)}{dt}=-b_jv_j(t)+\displaystyle\sum_{i=1}^{n}p_{ji}(t)\eta_i(t)+\sum_{i=1}^{n}q_{ji}(t)\eta_i(t-\sigma_{ji}(t))+\sum_{i=1}^{n}h_{ji}(t)\int_{t-\kappa_{ji}(t)}^{t}\eta_i(s)\,ds+J_j(t),\end{cases}$$

for a.e. $t\in[0,b)$, i = 1, 2, …, n; j = 1, 2, …, m.

Definition 2.2.

[28] The solution $z^*(t)=(u^*(t),v^*(t))$ of system (2.1) is said to be globally exponentially stable if, for any solution $z(t)=(u(t),v(t))$ of system (2.1), there exist constants $\alpha>0$ and $\lambda>0$ such that

$$\sum_{i=1}^{n}|u_i(t)-u_i^*(t)|+\sum_{j=1}^{m}|v_j(t)-v_j^*(t)|\le\lambda e^{-\alpha t}.$$

Suppose that $x(t):[0,+\infty)\to\mathbb{R}^{n}$ is absolutely continuous on any compact interval of $[0,+\infty)$. We give a chain rule for computing the time derivative of the composed function $V(x(t)):[0,+\infty)\to\mathbb{R}$ as follows.

Lemma 1.

(Chain Rule [22]). Suppose that $V(x):\mathbb{R}^{n}\to\mathbb{R}$ is C-regular and that $x(t):[0,+\infty)\to\mathbb{R}^{n}$ is absolutely continuous on any compact interval of $[0,+\infty)$. Then x(t) and $V(x(t)):[0,+\infty)\to\mathbb{R}$ are differentiable for a.e. $t\in[0,+\infty)$, and we have

$$\frac{dV(x(t))}{dt}=\Big\langle\xi(t),\frac{dx(t)}{dt}\Big\rangle,\qquad\forall\,\xi(t)\in\partial V(x(t)).$$

Lemma 2.

[31] Let $A=\begin{pmatrix}-a_i & 0\\ 0 & -b_j\end{pmatrix}$ and $\alpha=\min_{1\le i\le n,\,1\le j\le m}\{a_i,b_j\}$. Then

$$\|\exp(At)\|\le 2e^{-\alpha t},\quad\text{for all } t\ge 0.$$
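Since A is diagonal, this bound can be checked directly (a short verification added for the reader's convenience, using the matrix norm $\|D\|=\sum_{i,j}|d_{ij}|$ introduced above):

$$\exp(At)=\begin{pmatrix}e^{-a_it} & 0\\ 0 & e^{-b_jt}\end{pmatrix},\qquad \|\exp(At)\|=e^{-a_it}+e^{-b_jt}\le 2e^{-\min\{a_i,b_j\}t}\le 2e^{-\alpha t},\quad t\ge 0.$$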

Definition 2.3.

[32], [33] A continuous function $x(t):\mathbb{R}\to\mathbb{R}^{n}$ is said to be almost periodic on $\mathbb{R}$ if, for any $\varepsilon>0$, the set $T(x,\varepsilon)=\{\omega:|x(t+\omega)-x(t)|<\varepsilon,\ \forall t\in\mathbb{R}\}$ is relatively dense; that is, for any $\varepsilon>0$ it is possible to find a real number $l=l(\varepsilon)>0$ such that every interval of length $l(\varepsilon)$ contains a number $\omega=\omega(\varepsilon)$ with $|x(t+\omega)-x(t)|<\varepsilon$ for all $t\in\mathbb{R}$.
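A classical example (added here for illustration; it is not used in the proofs below) is the scalar function

$$x(t)=\sin t+\sin(\sqrt{2}\,t),$$

which is almost periodic but not periodic, since the two frequencies are incommensurable; every sufficiently long interval nevertheless contains an $\varepsilon$-almost period $\omega$.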

3 Existence of almost periodic solution

Theorem 3.1.

Suppose that the assumptions (H1) and (H2) hold. Then for any solution $z(t)=(u(t),v(t))$ of the discontinuous BAM neural network system (2.1) there exist two positive constants

$$M:=2(\|\varphi\|^{2}+\|\psi\|^{2})+\frac{2}{\alpha}\max_{1\le i\le n}\sum_{j=1}^{m}\big[(c_{ij}^{M}+d_{ij}^{M}+e_{ij}^{M}\delta_{ij}^{M})\gamma_j^{M}+I_i^{M}\big]+\frac{2}{\alpha}\max_{1\le j\le m}\sum_{i=1}^{n}\big[(p_{ji}^{M}+q_{ji}^{M}+h_{ji}^{M}\kappa_{ji}^{M})\eta_i^{M}+J_j^{M}\big],$$

and $\tilde{M}$ such that

$$|u_i(t)|\le M,\ i=1,2,\dots,n,\qquad |v_j(t)|\le M,\ j=1,2,\dots,m,\quad\text{for } t\in[-\varsigma,+\infty),$$
$$|\eta_i(t)|\le\tilde{M},\ i=1,2,\dots,n,\qquad |\gamma_j(t)|\le\tilde{M},\ j=1,2,\dots,m,\quad\text{for a.e. } t\in[-\varsigma,+\infty).$$

Proof:

Define the set-valued maps

$$u_i(t)\mapsto -a_iu_i(t)+\sum_{j=1}^{m}c_{ij}(t)\overline{co}[f_j(v_j(t))]+\sum_{j=1}^{m}d_{ij}(t)\overline{co}[f_j(v_j(t-\tau_{ij}(t)))]+\sum_{j=1}^{m}e_{ij}(t)\int_{t-\delta_{ij}(t)}^{t}\overline{co}[f_j(v_j(s))]\,ds+I_i(t),\quad i=1,2,\dots,n,$$

and

$$v_j(t)\mapsto -b_jv_j(t)+\sum_{i=1}^{n}p_{ji}(t)\overline{co}[g_i(u_i(t))]+\sum_{i=1}^{n}q_{ji}(t)\overline{co}[g_i(u_i(t-\sigma_{ji}(t)))]+\sum_{i=1}^{n}h_{ji}(t)\int_{t-\kappa_{ji}(t)}^{t}\overline{co}[g_i(u_i(s))]\,ds+J_j(t),\quad j=1,2,\dots,m.$$

By (H2), one can easily see that the above two set-valued maps are upper semi-continuous with nonempty compact convex values. Besides, according to the theorems in [22], [30], the local existence of a solution $z(t)=(u(t),v(t))$ of (2.1) is clear. That is, the IVP of (2.1) has at least one solution $z(t)=(u(t),v(t))=(u_1,u_2,\dots,u_n,v_1,v_2,\dots,v_m)$ on [0, b) for some $b\in(0,+\infty]$, and the derivatives of $u_i(t)$ and $v_j(t)$ are measurable selections from

$$-a_iu_i(t)+\sum_{j=1}^{m}c_{ij}(t)\overline{co}[f_j(v_j(t))]+\sum_{j=1}^{m}d_{ij}(t)\overline{co}[f_j(v_j(t-\tau_{ij}(t)))]+\sum_{j=1}^{m}e_{ij}(t)\int_{t-\delta_{ij}(t)}^{t}\overline{co}[f_j(v_j(s))]\,ds+I_i(t),\quad\text{for a.e. } t\in[0,b),\ i=1,2,\dots,n,$$

and

$$-b_jv_j(t)+\sum_{i=1}^{n}p_{ji}(t)\overline{co}[g_i(u_i(t))]+\sum_{i=1}^{n}q_{ji}(t)\overline{co}[g_i(u_i(t-\sigma_{ji}(t)))]+\sum_{i=1}^{n}h_{ji}(t)\int_{t-\kappa_{ji}(t)}^{t}\overline{co}[g_i(u_i(s))]\,ds+J_j(t),\quad\text{for a.e. } t\in[0,b),\ j=1,2,\dots,m.$$

It follows from the Continuation Theorem [[34], Theorem 2, p. 78] that either $b=+\infty$, or $b<+\infty$ and $\lim_{t\to b^{-}}\|z(t)\|=+\infty$, where $\|z(t)\|=\sum_{i=1}^{n}|u_i(t)|+\sum_{j=1}^{m}|v_j(t)|$.

Next, we show that $\lim_{t\to b^{-}}\|z(t)\|<+\infty$ if $b<+\infty$; that is, the maximal interval of existence of z(t) can be extended to $+\infty$. From Definition 2.1, there exists $\kappa(t)=(\eta(t),\gamma(t))=(\eta_1(t),\dots,\eta_n(t),\gamma_1(t),\dots,\gamma_m(t)):[-\varsigma,b)\to\mathbb{R}^{n+m}$, where $\eta_i(t)\in\overline{co}[g_i(u_i(t))]$, $\gamma_j(t)\in\overline{co}[f_j(v_j(t))]$ for a.e. $t\in[-\varsigma,b)$, such that

$$\begin{cases}\dfrac{du_i(t)}{dt}=-a_iu_i(t)+\displaystyle\sum_{j=1}^{m}c_{ij}(t)\gamma_j(t)+\sum_{j=1}^{m}d_{ij}(t)\gamma_j(t-\tau_{ij}(t))+\sum_{j=1}^{m}e_{ij}(t)\int_{t-\delta_{ij}(t)}^{t}\gamma_j(s)\,ds+I_i(t),\\[2mm]\dfrac{dv_j(t)}{dt}=-b_jv_j(t)+\displaystyle\sum_{i=1}^{n}p_{ji}(t)\eta_i(t)+\sum_{i=1}^{n}q_{ji}(t)\eta_i(t-\sigma_{ji}(t))+\sum_{i=1}^{n}h_{ji}(t)\int_{t-\kappa_{ji}(t)}^{t}\eta_i(s)\,ds+J_j(t),\end{cases}$$

for a.e. $t\in[0,b)$, i = 1, 2, …, n, j = 1, 2, …, m.

Let

$$z_{ij}(t)=\begin{pmatrix}u_i(t)\\ v_j(t)\end{pmatrix},\qquad A=\begin{pmatrix}-a_i & 0\\ 0 & -b_j\end{pmatrix},\qquad \Theta_{ij}(t)=\begin{pmatrix}I_i(t)\\ J_j(t)\end{pmatrix},$$

and

$$F_{ij}(\eta_i(t),\gamma_j(t))=\begin{pmatrix}\displaystyle\sum_{j=1}^{m}c_{ij}(t)\gamma_j(t)+\sum_{j=1}^{m}d_{ij}(t)\gamma_j(t-\tau_{ij}(t))+\sum_{j=1}^{m}e_{ij}(t)\int_{t-\delta_{ij}(t)}^{t}\gamma_j(s)\,ds\\[2mm]\displaystyle\sum_{i=1}^{n}p_{ji}(t)\eta_i(t)+\sum_{i=1}^{n}q_{ji}(t)\eta_i(t-\sigma_{ji}(t))+\sum_{i=1}^{n}h_{ji}(t)\int_{t-\kappa_{ji}(t)}^{t}\eta_i(s)\,ds\end{pmatrix}.$$

Then system (2.1) can be rewritten as

(3.1) $\dfrac{dz_{ij}(t)}{dt}=Az_{ij}(t)+F_{ij}(\eta_i(t),\gamma_j(t))+\Theta_{ij}(t),$

where $\eta_i(t)\in\overline{co}[g_i(u_i(t))]$, $\gamma_j(t)\in\overline{co}[f_j(v_j(t))]$ for a.e. $t\in[0,b)$, i = 1, 2, …, n, j = 1, 2, …, m.

Solving (3.1), we can obtain

$$z_{ij}(t)=e^{At}z_{ij}(0)+\int_{0}^{t}e^{A(t-s)}\big[F_{ij}(\eta_i(s),\gamma_j(s))+\Theta_{ij}(s)\big]\,ds.$$

From Lemma 2, (H2) and (H3), it follows that

$$\begin{aligned}\|z_{ij}(t)\|&\le 2e^{-\alpha t}\|z_{ij}(0)\|+2\int_{0}^{t}e^{-\alpha(t-s)}\big[\|F_{ij}(\eta_i(s),\gamma_j(s))\|+\|\Theta_{ij}(s)\|\big]\,ds\\&\le 2(\|\varphi\|^{2}+\|\psi\|^{2})+\frac{2}{\alpha}(1-e^{-\alpha t})\Big\{\sum_{j=1}^{m}\big[(c_{ij}^{M}+d_{ij}^{M}+e_{ij}^{M}\delta_{ij}^{M})\gamma_j^{M}+I_i^{M}\big]+\sum_{i=1}^{n}\big[(p_{ji}^{M}+q_{ji}^{M}+h_{ji}^{M}\kappa_{ji}^{M})\eta_i^{M}+J_j^{M}\big]\Big\}\\&\le 2(\|\varphi\|^{2}+\|\psi\|^{2})+\frac{2}{\alpha}\max_{1\le i\le n}\sum_{j=1}^{m}\big[(c_{ij}^{M}+d_{ij}^{M}+e_{ij}^{M}\delta_{ij}^{M})\gamma_j^{M}+I_i^{M}\big]+\frac{2}{\alpha}\max_{1\le j\le m}\sum_{i=1}^{n}\big[(p_{ji}^{M}+q_{ji}^{M}+h_{ji}^{M}\kappa_{ji}^{M})\eta_i^{M}+J_j^{M}\big]:=M.\end{aligned}$$

Then, we can obtain

(3.2) $|u_i(t)|\le M,\ i=1,2,\dots,n,\qquad |v_j(t)|\le M,\ j=1,2,\dots,m,\quad\text{for } t\in[0,b).$

Hence, $z(t)=(u(t),v(t))=(u_1(t),\dots,u_n(t),v_1(t),\dots,v_m(t))$ is bounded on its interval of existence $[-\varsigma,b)$. Thus, $\lim_{t\to b^{-}}\|z(t)\|<+\infty$, which implies that $b=+\infty$. Therefore, by (3.2), we have

(3.3) $|u_i(t)|\le M,\ i=1,2,\dots,n,\qquad |v_j(t)|\le M,\ j=1,2,\dots,m,\quad t\in[-\varsigma,+\infty).$

Furthermore, by (H1), $f_j$ has a finite number of discontinuity points in any compact interval of $\mathbb{R}$; in particular, $f_j$ has a finite number of discontinuity points in the compact interval [−M, M]. Without loss of generality, let $f_j$ be discontinuous at the points $\{\rho_k^{j}:k=1,2,\dots,l_j\}$ in [−M, M], and assume that $-M<\rho_1^{j}<\rho_2^{j}<\dots<\rho_{l_j}^{j}<M$. Let us consider the following continuous functions:

$$f_j^{0}(x)=\begin{cases}f_j(x), & x\in[-M,\rho_1^{j}),\\ f_j(\rho_1^{j}-0), & x=\rho_1^{j};\end{cases}\qquad f_j^{l_j}(x)=\begin{cases}f_j(\rho_{l_j}^{j}+0), & x=\rho_{l_j}^{j},\\ f_j(x), & x\in(\rho_{l_j}^{j},M];\end{cases}$$

and

$$f_j^{k}(x)=\begin{cases}f_j(\rho_k^{j}+0), & x=\rho_k^{j},\\ f_j(x), & x\in(\rho_k^{j},\rho_{k+1}^{j}),\\ f_j(\rho_{k+1}^{j}-0), & x=\rho_{k+1}^{j},\end{cases}\qquad k=1,2,\dots,l_j-1.$$

Let

$$M_j=\max\Big\{\max_{x\in[-M,\rho_1^{j}]}f_j^{0}(x),\ \max_{1\le k\le l_j-1}\max_{x\in[\rho_k^{j},\rho_{k+1}^{j}]}f_j^{k}(x),\ \max_{x\in[\rho_{l_j}^{j},M]}f_j^{l_j}(x)\Big\},\qquad m_j=\min\Big\{\min_{x\in[-M,\rho_1^{j}]}f_j^{0}(x),\ \min_{1\le k\le l_j-1}\min_{x\in[\rho_k^{j},\rho_{k+1}^{j}]}f_j^{k}(x),\ \min_{x\in[\rho_{l_j}^{j},M]}f_j^{l_j}(x)\Big\}.$$

It is clear that

$$\big|\overline{co}[f_j(v_j(t))]\big|\le\max\{|M_j|,|m_j|\},\qquad j=1,2,\dots,m.$$

Note that $\gamma_j(t)\in\overline{co}[f_j(v_j(t))]$ for a.e. $t\in[-\varsigma,+\infty)$ and j = 1, 2, …, m; hence we have

(3.4) $|\gamma_j(t)|\le\max\{|M_j|,|m_j|\},\quad\text{for a.e. } t\in[-\varsigma,+\infty),\ j=1,2,\dots,m.$

In a similar way, let

$$\tilde{M}_i=\max\Big\{\max_{x\in[-M,\rho_1^{i}]}g_i^{0}(x),\ \max_{1\le k\le l_i-1}\max_{x\in[\rho_k^{i},\rho_{k+1}^{i}]}g_i^{k}(x),\ \max_{x\in[\rho_{l_i}^{i},M]}g_i^{l_i}(x)\Big\},\qquad \tilde{m}_i=\min\Big\{\min_{x\in[-M,\rho_1^{i}]}g_i^{0}(x),\ \min_{1\le k\le l_i-1}\min_{x\in[\rho_k^{i},\rho_{k+1}^{i}]}g_i^{k}(x),\ \min_{x\in[\rho_{l_i}^{i},M]}g_i^{l_i}(x)\Big\}.$$

It is clear that

$$\big|\overline{co}[g_i(u_i(t))]\big|\le\max\{|\tilde{M}_i|,|\tilde{m}_i|\},\qquad i=1,2,\dots,n.$$

Note that $\eta_i(t)\in\overline{co}[g_i(u_i(t))]$ for a.e. $t\in[-\varsigma,+\infty)$ and i = 1, 2, …, n; hence we have

(3.5) $|\eta_i(t)|\le\max\{|\tilde{M}_i|,|\tilde{m}_i|\},\quad\text{for a.e. } t\in[-\varsigma,+\infty),\ i=1,2,\dots,n.$

Let

$$\tilde{M}=\max_{1\le i\le n,\,1\le j\le m}\{|M_j|,|m_j|,|\tilde{M}_i|,|\tilde{m}_i|\}.$$

Then, from (3.4) and (3.5), it follows that

$$|\eta_i(t)|\le\tilde{M},\ i=1,2,\dots,n,\qquad |\gamma_j(t)|\le\tilde{M},\ j=1,2,\dots,m,\quad\text{for a.e. } t\in[-\varsigma,+\infty).$$

This completes the proof.□

Theorem 3.2.

Suppose that the assumptions (H1) and (H2) hold and that the following conditions are satisfied:

(H3).

For i = 1, 2, …, n, j = 1, 2, …, m, the delays $\tau_{ij}(t)$, $\delta_{ij}(t)$, $\sigma_{ji}(t)$ and $\kappa_{ji}(t)$ are nonnegative continuous almost periodic functions, and $c_{ij}(t)$, $d_{ij}(t)$, $e_{ij}(t)$, $p_{ji}(t)$, $q_{ji}(t)$, $h_{ji}(t)$, $I_i(t)$ and $J_j(t)$ are continuous almost periodic functions; that is, for any $\varepsilon>0$ there exists $l=l(\varepsilon)>0$ such that every interval $[\alpha,\alpha+l]$ contains an $\omega\in[\alpha,\alpha+l]$ with

$$\begin{aligned}&|c_{ij}(t+\omega)-c_{ij}(t)|<\varepsilon, && |d_{ij}(t+\omega)-d_{ij}(t)|<\varepsilon, && |e_{ij}(t+\omega)-e_{ij}(t)|<\varepsilon,\\ &|I_i(t+\omega)-I_i(t)|<\varepsilon, && |\tau_{ij}(t+\omega)-\tau_{ij}(t)|<\varepsilon, && |\delta_{ij}(t+\omega)-\delta_{ij}(t)|<\varepsilon,\\ &|p_{ji}(t+\omega)-p_{ji}(t)|<\varepsilon, && |q_{ji}(t+\omega)-q_{ji}(t)|<\varepsilon, && |h_{ji}(t+\omega)-h_{ji}(t)|<\varepsilon,\\ &|J_j(t+\omega)-J_j(t)|<\varepsilon, && |\sigma_{ji}(t+\omega)-\sigma_{ji}(t)|<\varepsilon, && |\kappa_{ji}(t+\omega)-\kappa_{ji}(t)|<\varepsilon,\end{aligned}$$

holding for all i = 1, 2, …, n, j = 1, 2, …, m and $t\in\mathbb{R}$.

(H4).

The delays $\tau_{ij}(t)$ and $\sigma_{ji}(t)$ are continuously differentiable and satisfy $\tau_{ij}'(t)<1$, $\sigma_{ji}'(t)<1$, for i = 1, 2, …, n, j = 1, 2, …, m. Moreover, there exist positive constants $\xi_1,\xi_2,\dots,\xi_{n+m}$ and $\rho>0$ such that $a_i^{L}\ge\rho$, $b_j^{L}\ge\rho$ and

$$\limsup_{t\to+\infty}\Upsilon_i(t)<0,\ i=1,2,\dots,n;\qquad \limsup_{t\to+\infty}\Gamma_j(t)<0,\ j=1,2,\dots,m,$$

where

$$\begin{aligned}\Upsilon_i(t)={}&\xi_ic_{ii}(t)+\sum_{j=1,j\ne i}^{m}\xi_j|c_{ji}(t)|+\sum_{j=1}^{m}\xi_j\frac{|d_{ji}(\varphi_{ji}^{-1}(t))|}{1-\tau_{ji}'(\varphi_{ji}^{-1}(t))}e^{\rho\tau}\\&+\sum_{j=1}^{m}\xi_j\delta_{ji}(t)\int_{t-\delta_{ji}(t)}^{t}|e_{ji}(u+\delta_{ji}(t))|e^{\rho[u+\delta_{ji}(t)-t]}\,du+\sum_{j=1}^{m}\xi_j\int_{-\delta_{ji}(t)}^{0}|e_{ji}(t-s)|e^{\rho(t-s)}\,ds,\\ \Gamma_j(t)={}&\xi_jp_{jj}(t)+\sum_{i=1,i\ne j}^{n}\xi_i|p_{ij}(t)|+\sum_{i=1}^{n}\xi_ie^{\rho\sigma}\frac{|q_{ij}(\tilde\varphi_{ij}^{-1}(t))|}{1-\sigma_{ij}'(\tilde\varphi_{ij}^{-1}(t))}\\&+\sum_{i=1}^{n}\xi_i\kappa_{ij}(t)\int_{t-\kappa_{ij}(t)}^{t}|h_{ij}(u+\kappa_{ij}(t))|e^{\rho[u+\kappa_{ij}(t)-t]}\,du+\sum_{i=1}^{n}\xi_i\int_{-\kappa_{ij}(t)}^{0}|h_{ij}(t-s)|e^{-s}\,ds,\end{aligned}$$

$\varphi_{ij}^{-1}$ is the inverse function of $\varphi_{ij}(t)=t-\tau_{ij}(t)$ and $\tilde\varphi_{ij}^{-1}$ is the inverse function of $\tilde\varphi_{ij}(t)=t-\sigma_{ij}(t)$.

Then any solution $z(t)=(u(t),v(t))$ of the discontinuous BAM neural network system (2.1) associated with an output $\kappa(t)=(\eta(t),\gamma(t))$ is asymptotically almost periodic; i.e., for any $\varepsilon>0$, there exist $T>0$, $l=l(\varepsilon)$ and $\omega=\omega(\varepsilon)$ in any interval of length $l(\varepsilon)$ such that

$$\sum_{i=1}^{n}|u_i(t+\omega)-u_i(t)|+\sum_{j=1}^{m}|v_j(t+\omega)-v_j(t)|\le\varepsilon,\quad\text{for all } t\ge T.$$

Proof.

From (H3), it follows that, for any $\varepsilon>0$, there exists $l=l(\varepsilon)$ such that every interval $[\alpha,\alpha+l]$, $\alpha\in\mathbb{R}$, contains an $\omega$ satisfying the following inequalities:

(3.6)
$$\begin{aligned}&|I_i(t+\omega)-I_i(t)|<\frac{\xi_m\rho\varepsilon}{48n\xi_M}, && |J_j(t+\omega)-J_j(t)|<\frac{\xi_m\rho\varepsilon}{48m\xi_M},\\ &|c_{ij}(t+\omega)-c_{ij}(t)|<\frac{\xi_m\rho\varepsilon}{48nm\tilde{M}\xi_M}, && |p_{ji}(t+\omega)-p_{ji}(t)|<\frac{\xi_m\rho\varepsilon}{48nm\tilde{M}\xi_M},\\ &|d_{ij}(t+\omega)-d_{ij}(t)|<\frac{\xi_m\rho\varepsilon}{48nm\tilde{M}\xi_M}, && |q_{ji}(t+\omega)-q_{ji}(t)|<\frac{\xi_m\rho\varepsilon}{48nm\tilde{M}\xi_M},\\ &|e_{ij}(t+\omega)-e_{ij}(t)|<\frac{\xi_m\rho\varepsilon}{48nm\delta_{ij}^{M}\tilde{M}\xi_M}, && |h_{ji}(t+\omega)-h_{ji}(t)|<\frac{\xi_m\rho\varepsilon}{48nm\kappa_{ji}^{M}\tilde{M}\xi_M},\end{aligned}$$

where $\xi_m:=\min_{1\le i\le n+m}\{\xi_i\}$ and $\xi_M:=\max_{1\le i\le n+m}\{\xi_i\}$. Furthermore, in view of (H1) and $\gamma_j(t)\in\overline{co}[f_j(v_j(t))]$ (j = 1, 2, …, m), $\eta_i(t)\in\overline{co}[g_i(u_i(t))]$ (i = 1, 2, …, n) for a.e. $t\in[-\varsigma,+\infty)$, we can see that

(3.7)
$$\begin{aligned}&|\gamma_j(t+\omega-\tau_{ij}(t+\omega))-\gamma_j(t+\omega-\tau_{ij}(t))|<\frac{\xi_m\rho\varepsilon}{48nm\,d_{ij}^{M}\xi_M}, &&\text{for a.e. } t\in[-\varsigma,+\infty),\\ &\Big|\int_{t+\omega-\delta_{ij}(t+\omega)}^{t+\omega}\gamma_j(s)\,ds-\int_{t+\omega-\delta_{ij}(t)}^{t+\omega}\gamma_j(s)\,ds\Big|<\frac{\xi_m\rho\varepsilon}{48nm\,e_{ij}^{M}\xi_M}, &&\text{for a.e. } t\in[-\varsigma,+\infty),\\ &|\eta_i(t+\omega-\sigma_{ji}(t+\omega))-\eta_i(t+\omega-\sigma_{ji}(t))|<\frac{\xi_m\rho\varepsilon}{48nm\,q_{ji}^{M}\xi_M}, &&\text{for a.e. } t\in[-\varsigma,+\infty),\\ &\Big|\int_{t+\omega-\kappa_{ji}(t+\omega)}^{t+\omega}\eta_i(s)\,ds-\int_{t+\omega-\kappa_{ji}(t)}^{t+\omega}\eta_i(s)\,ds\Big|<\frac{\xi_m\rho\varepsilon}{48nm\,h_{ji}^{M}\xi_M}, &&\text{for a.e. } t\in[-\varsigma,+\infty).\end{aligned}$$

Let

$$\begin{aligned}\Phi_i(t,\omega)={}&\sum_{j=1}^{m}[c_{ij}(t+\omega)-c_{ij}(t)]\gamma_j(t+\omega)+\sum_{j=1}^{m}[d_{ij}(t+\omega)-d_{ij}(t)]\gamma_j(t+\omega-\tau_{ij}(t+\omega))\\&+\sum_{j=1}^{m}[e_{ij}(t+\omega)-e_{ij}(t)]\int_{t+\omega-\delta_{ij}(t+\omega)}^{t+\omega}\gamma_j(s)\,ds+[I_i(t+\omega)-I_i(t)],\\ \Psi_i(t,\omega)={}&\sum_{j=1}^{m}d_{ij}(t)\big[\gamma_j(t+\omega-\tau_{ij}(t+\omega))-\gamma_j(t+\omega-\tau_{ij}(t))\big]+\sum_{j=1}^{m}e_{ij}(t)\Big[\int_{t+\omega-\delta_{ij}(t+\omega)}^{t+\omega}\gamma_j(s)\,ds-\int_{t+\omega-\delta_{ij}(t)}^{t+\omega}\gamma_j(s)\,ds\Big],\end{aligned}$$

and

$$\begin{aligned}\tilde\Phi_j(t,\omega)={}&\sum_{i=1}^{n}[p_{ji}(t+\omega)-p_{ji}(t)]\eta_i(t+\omega)+\sum_{i=1}^{n}[q_{ji}(t+\omega)-q_{ji}(t)]\eta_i(t+\omega-\sigma_{ji}(t+\omega))\\&+\sum_{i=1}^{n}[h_{ji}(t+\omega)-h_{ji}(t)]\int_{t+\omega-\kappa_{ji}(t+\omega)}^{t+\omega}\eta_i(s)\,ds+[J_j(t+\omega)-J_j(t)],\\ \tilde\Psi_j(t,\omega)={}&\sum_{i=1}^{n}q_{ji}(t)\big[\eta_i(t+\omega-\sigma_{ji}(t+\omega))-\eta_i(t+\omega-\sigma_{ji}(t))\big]+\sum_{i=1}^{n}h_{ji}(t)\Big[\int_{t+\omega-\kappa_{ji}(t+\omega)}^{t+\omega}\eta_i(s)\,ds-\int_{t+\omega-\kappa_{ji}(t)}^{t+\omega}\eta_i(s)\,ds\Big].\end{aligned}$$

Then, by (3.6) and (3.7), we can obtain

(3.8)
$$|\Phi_i(t,\omega)|\le\frac{\xi_m\rho\varepsilon}{48n\xi_M},\quad |\Psi_i(t,\omega)|\le\frac{\xi_m\rho\varepsilon}{48n\xi_M};\qquad |\tilde\Phi_j(t,\omega)|\le\frac{\xi_m\rho\varepsilon}{48m\xi_M},\quad |\tilde\Psi_j(t,\omega)|\le\frac{\xi_m\rho\varepsilon}{48m\xi_M}.$$

Define $\vartheta_i(t)=\operatorname{sign}\{u_i(t+\omega)-u_i(t)\}$ if $u_i(t+\omega)\ne u_i(t)$, while $\vartheta_i(t)$ can be chosen arbitrarily in [−1, 1] if $u_i(t+\omega)=u_i(t)$. In particular, let $\vartheta_i(t)$ be as follows:

$$\vartheta_i(t)=\begin{cases}0, & u_i(t+\omega)-u_i(t)=\eta_i(t+\omega)-\eta_i(t)=0,\\ \operatorname{sign}\{\eta_i(t+\omega)-\eta_i(t)\}, & u_i(t+\omega)=u_i(t)\ \text{and}\ \eta_i(t+\omega)\ne\eta_i(t),\\ \operatorname{sign}\{u_i(t+\omega)-u_i(t)\}, & u_i(t+\omega)\ne u_i(t).\end{cases}$$

Then, we have

(3.9) $\vartheta_i(t)\{u_i(t+\omega)-u_i(t)\}=|u_i(t+\omega)-u_i(t)|$, $\quad\vartheta_i(t)\{\eta_i(t+\omega)-\eta_i(t)\}=|\eta_i(t+\omega)-\eta_i(t)|$, $\quad i=1,2,\dots,n$.

Similarly, define $\theta_j(t)=\operatorname{sign}\{v_j(t+\omega)-v_j(t)\}$ if $v_j(t+\omega)\ne v_j(t)$, while $\theta_j(t)$ can be chosen arbitrarily in [−1, 1] if $v_j(t+\omega)=v_j(t)$. In particular, let $\theta_j(t)$ be as follows:

$$\theta_j(t)=\begin{cases}0, & v_j(t+\omega)-v_j(t)=\gamma_j(t+\omega)-\gamma_j(t)=0,\\ \operatorname{sign}\{\gamma_j(t+\omega)-\gamma_j(t)\}, & v_j(t+\omega)=v_j(t)\ \text{and}\ \gamma_j(t+\omega)\ne\gamma_j(t),\\ \operatorname{sign}\{v_j(t+\omega)-v_j(t)\}, & v_j(t+\omega)\ne v_j(t).\end{cases}$$

Then, we have

(3.10) $\theta_j(t)\{v_j(t+\omega)-v_j(t)\}=|v_j(t+\omega)-v_j(t)|$, $\quad\theta_j(t)\{\gamma_j(t+\omega)-\gamma_j(t)\}=|\gamma_j(t+\omega)-\gamma_j(t)|$, $\quad j=1,2,\dots,m$.

Let $y_i(t)=u_i(t+\omega)-u_i(t)$, $z_j(t)=v_j(t+\omega)-v_j(t)$, i = 1, 2, …, n, j = 1, 2, …, m. Then, it follows from system (2.3) that

(3.11)
$$\begin{cases}\dfrac{dy_i(t)}{dt}=-a_iy_i(t)+\displaystyle\sum_{j=1}^{m}c_{ij}(t)[\gamma_j(t+\omega)-\gamma_j(t)]+\sum_{j=1}^{m}d_{ij}(t)[\gamma_j(t+\omega-\tau_{ij}(t))-\gamma_j(t-\tau_{ij}(t))]\\\qquad\qquad+\displaystyle\sum_{j=1}^{m}e_{ij}(t)\int_{t-\delta_{ij}(t)}^{t}[\gamma_j(s+\omega)-\gamma_j(s)]\,ds+\Phi_i(t,\omega)+\Psi_i(t,\omega),\\[2mm]\dfrac{dz_j(t)}{dt}=-b_jz_j(t)+\displaystyle\sum_{i=1}^{n}p_{ji}(t)[\eta_i(t+\omega)-\eta_i(t)]+\sum_{i=1}^{n}q_{ji}(t)[\eta_i(t+\omega-\sigma_{ji}(t))-\eta_i(t-\sigma_{ji}(t))]\\\qquad\qquad+\displaystyle\sum_{i=1}^{n}h_{ji}(t)\int_{t-\kappa_{ji}(t)}^{t}[\eta_i(s+\omega)-\eta_i(s)]\,ds+\tilde\Phi_j(t,\omega)+\tilde\Psi_j(t,\omega).\end{cases}$$

Consider the following suitable Lyapunov function:

(3.12)
$$\begin{aligned}V(t)={}&\sum_{i=1}^{n}\xi_i|y_i(t)|e^{\rho t}+\sum_{j=1}^{m}\xi_j|z_j(t)|e^{\rho t}+\sum_{i=1}^{n}\sum_{j=1}^{m}\xi_i\int_{t-\tau_{ij}(t)}^{t}\frac{|d_{ij}(\varphi_{ij}^{-1}(u))|}{1-\tau_{ij}'(\varphi_{ij}^{-1}(u))}|\gamma_j(u+\omega)-\gamma_j(u)|e^{\rho(u+\tau_{ij}^{M})}\,du\\&+\sum_{i=1}^{n}\sum_{j=1}^{m}\xi_i\int_{-\delta_{ij}(t)}^{0}\int_{t+s}^{t}|e_{ij}(u-s)||\gamma_j(u+\omega)-\gamma_j(u)|e^{\rho(u-s)}\,du\,ds\\&+\sum_{j=1}^{m}\sum_{i=1}^{n}\xi_j\int_{t-\sigma_{ji}(t)}^{t}\frac{|q_{ji}(\tilde\varphi_{ji}^{-1}(u))|}{1-\sigma_{ji}'(\tilde\varphi_{ji}^{-1}(u))}|\eta_i(u+\omega)-\eta_i(u)|e^{\rho(u+\sigma_{ji}^{M})}\,du\\&+\sum_{j=1}^{m}\sum_{i=1}^{n}\xi_j\int_{-\kappa_{ji}(t)}^{0}\int_{t+s}^{t}|h_{ji}(u-s)||\eta_i(u+\omega)-\eta_i(u)|e^{\rho(u-s)}\,du\,ds.\end{aligned}$$

Obviously, V(t) is regular. Meanwhile, the solutions $(u_i(t),v_j(t))$ and $(u_i(t+\omega),v_j(t+\omega))$ of the discontinuous system (2.1) are absolutely continuous, so V(t) is differentiable for a.e. t ≥ 0. Then, by applying Lemma 1, for a.e. t ≥ 0 we can get that

$$\begin{aligned}\frac{dV(t)}{dt}\le{}& e^{\rho t}\sum_{i=1}^{n}(\rho-a_i^{L})\xi_i|y_i(t)|+\sum_{i=1}^{n}e^{\rho t}\xi_ic_{ii}(t)|\gamma_i(t+\omega)-\gamma_i(t)|+\sum_{i=1}^{n}\sum_{j=1,j\ne i}^{m}e^{\rho t}\xi_i|c_{ij}(t)||\gamma_j(t+\omega)-\gamma_j(t)|\\&+\sum_{i=1}^{n}e^{\rho t}\xi_i\big[|\Phi_i(t,\omega)|+|\Psi_i(t,\omega)|\big]+\sum_{i=1}^{n}\sum_{j=1}^{m}\xi_i\frac{|d_{ij}(\varphi_{ij}^{-1}(t))|}{1-\tau_{ij}'(\varphi_{ij}^{-1}(t))}|\gamma_j(t+\omega)-\gamma_j(t)|e^{\rho(t+\tau_{ij}^{M})}\\&+\sum_{i=1}^{n}\sum_{j=1}^{m}e^{\rho t}\xi_i\delta_{ij}(t)\int_{t-\delta_{ij}(t)}^{t}|e_{ij}(u+\delta_{ij}(t))||\gamma_j(u+\omega)-\gamma_j(u)|e^{\rho[u+\delta_{ij}(t)-t]}\,du\\&+\sum_{i=1}^{n}\sum_{j=1}^{m}\xi_i\int_{-\delta_{ij}(t)}^{0}|e_{ij}(t-s)||\gamma_j(t+\omega)-\gamma_j(t)|e^{\rho(t-s)}\,ds+e^{\rho t}\sum_{j=1}^{m}\xi_j(\rho-b_j^{L})|z_j(t)|\\&+\sum_{j=1}^{m}e^{\rho t}\xi_jp_{jj}(t)|\eta_j(t+\omega)-\eta_j(t)|+\sum_{j=1}^{m}\sum_{i=1,i\ne j}^{n}e^{\rho t}\xi_j|p_{ji}(t)||\eta_i(t+\omega)-\eta_i(t)|\\&+\sum_{j=1}^{m}e^{\rho t}\xi_j\big[|\tilde\Phi_j(t,\omega)|+|\tilde\Psi_j(t,\omega)|\big]+\sum_{j=1}^{m}\sum_{i=1}^{n}\xi_j\frac{|q_{ji}(\tilde\varphi_{ji}^{-1}(t))|}{1-\sigma_{ji}'(\tilde\varphi_{ji}^{-1}(t))}|\eta_i(t+\omega)-\eta_i(t)|e^{\rho(t+\sigma_{ji}^{M})}\\&+\sum_{j=1}^{m}\sum_{i=1}^{n}e^{\rho t}\xi_j\kappa_{ji}(t)\int_{t-\kappa_{ji}(t)}^{t}|h_{ji}(u+\kappa_{ji}(t))||\eta_i(u+\omega)-\eta_i(u)|e^{\rho[u+\kappa_{ji}(t)-t]}\,du\\&+\sum_{j=1}^{m}\sum_{i=1}^{n}e^{\rho t}\xi_j\int_{-\kappa_{ji}(t)}^{0}|h_{ji}(t-s)||\eta_i(t+\omega)-\eta_i(t)|e^{-s}\,ds.\end{aligned}$$

In view of (3.6)–(3.8), we further obtain

$$\begin{aligned}\frac{dV(t)}{dt}\le{}& e^{\rho t}\sum_{i=1}^{n}(\rho-a_i^{L})\xi_i|y_i(t)|\\&+\sum_{i=1}^{n}e^{\rho t}\Big\{\xi_ic_{ii}(t)+\sum_{j=1,j\ne i}^{m}\xi_j|c_{ji}(t)|+\sum_{j=1}^{m}\xi_j\frac{|d_{ji}(\varphi_{ji}^{-1}(t))|}{1-\tau_{ji}'(\varphi_{ji}^{-1}(t))}e^{\rho\tau_{ji}^{M}}\\&\qquad\qquad+\sum_{j=1}^{m}\xi_j\delta_{ji}(t)\int_{t-\delta_{ji}(t)}^{t}|e_{ji}(u+\delta_{ji}(t))|e^{\rho[u+\delta_{ji}(t)-t]}\,du+\sum_{j=1}^{m}\xi_j\int_{-\delta_{ji}(t)}^{0}|e_{ji}(t-s)|e^{\rho(t-s)}\,ds\Big\}|\gamma_i(t+\omega)-\gamma_i(t)|\\&+\sum_{i=1}^{n}e^{\rho t}\xi_i\big[|\Phi_i(t,\omega)|+|\Psi_i(t,\omega)|\big]+e^{\rho t}\sum_{j=1}^{m}\xi_j(\rho-b_j^{L})|z_j(t)|\\&+\sum_{j=1}^{m}e^{\rho t}\Big\{\xi_jp_{jj}(t)+\sum_{i=1,i\ne j}^{n}\xi_i|p_{ij}(t)|+\sum_{i=1}^{n}\xi_i\frac{|q_{ij}(\tilde\varphi_{ij}^{-1}(t))|}{1-\sigma_{ij}'(\tilde\varphi_{ij}^{-1}(t))}e^{\rho\sigma_{ij}^{M}}\\&\qquad\qquad+\sum_{i=1}^{n}\xi_i\kappa_{ij}(t)\int_{t-\kappa_{ij}(t)}^{t}|h_{ij}(u+\kappa_{ij}(t))|e^{\rho[u+\kappa_{ij}(t)-t]}\,du+\sum_{i=1}^{n}\xi_i\int_{-\kappa_{ij}(t)}^{0}|h_{ij}(t-s)|e^{-s}\,ds\Big\}|\eta_j(t+\omega)-\eta_j(t)|\\&+\sum_{j=1}^{m}e^{\rho t}\xi_j\big[|\tilde\Phi_j(t,\omega)|+|\tilde\Psi_j(t,\omega)|\big]\\={}& e^{\rho t}\sum_{i=1}^{n}(\rho-a_i^{L})\xi_i|y_i(t)|+\sum_{i=1}^{n}e^{\rho t}\Upsilon_i(t)|\gamma_i(t+\omega)-\gamma_i(t)|+\sum_{i=1}^{n}e^{\rho t}\xi_i\big[|\Phi_i(t,\omega)|+|\Psi_i(t,\omega)|\big]\\&+e^{\rho t}\sum_{j=1}^{m}\xi_j(\rho-b_j^{L})|z_j(t)|+\sum_{j=1}^{m}e^{\rho t}\Gamma_j(t)|\eta_j(t+\omega)-\eta_j(t)|+\sum_{j=1}^{m}e^{\rho t}\xi_j\big[|\tilde\Phi_j(t,\omega)|+|\tilde\Psi_j(t,\omega)|\big],\end{aligned}$$

which together with (H4) and (3.8) gives

$$\begin{aligned}\frac{dV(t)}{dt}&\le\sum_{i=1}^{n}e^{\rho t}\xi_i\big[|\Phi_i(t,\omega)|+|\Psi_i(t,\omega)|\big]+\sum_{j=1}^{m}e^{\rho t}\xi_j\big[|\tilde\Phi_j(t,\omega)|+|\tilde\Psi_j(t,\omega)|\big]\\&\le 2\sum_{i=1}^{n}e^{\rho t}\xi_i\frac{\xi_m\rho\varepsilon}{8n\xi_M}+2\sum_{j=1}^{m}e^{\rho t}\xi_j\frac{\xi_m\rho\varepsilon}{8m\xi_M}\le\sum_{i=1}^{n}e^{\rho t}\xi_i\frac{\xi_m\rho\varepsilon}{4n\xi_M}+\sum_{j=1}^{m}e^{\rho t}\xi_j\frac{\xi_m\rho\varepsilon}{4m\xi_M}\\&\le\sum_{i=1}^{n}e^{\rho t}\frac{\xi_m\rho\varepsilon}{4n}+\sum_{j=1}^{m}e^{\rho t}\frac{\xi_m\rho\varepsilon}{4m},\qquad\text{a.e. } t\ge 0.\end{aligned}$$

Thus, by (3.12), we have

$$\xi_me^{\rho t}\Big[\sum_{i=1}^{n}|y_i(t)|+\sum_{j=1}^{m}|z_j(t)|\Big]\le V(t)\le V(0)+e^{\rho t}\Big(\sum_{i=1}^{n}\frac{\xi_m\varepsilon}{4n}+\sum_{j=1}^{m}\frac{\xi_m\varepsilon}{4m}\Big),$$

which leads to

(3.13) $\displaystyle\sum_{i=1}^{n}|y_i(t)|+\sum_{j=1}^{m}|z_j(t)|\le\frac{1}{\xi_m}e^{-\rho t}V(0)+\frac{\varepsilon}{2}.$

Moreover, from (3.12), V(0) is a finite constant. Then, we can choose a sufficiently large T > 0 such that

$$\frac{1}{\xi_m}e^{-\rho t}V(0)\le\frac{\varepsilon}{2},\quad\text{for } t\ge T,$$

which together with (3.13) and the arbitrariness of ε gives

$$\sum_{i=1}^{n}|u_i(t+\omega)-u_i(t)|+\sum_{j=1}^{m}|v_j(t+\omega)-v_j(t)|\le\varepsilon,\quad\text{for } t\ge T.$$

Therefore, the proof is complete.□

4 Global exponential stability

Theorem 4.1.

Suppose that the assumptions (H1)–(H4) are satisfied. Then the almost periodic solution of the discontinuous BAM neural network system (2.1) is globally exponentially stable.

Proof. Suppose that $(u^*(t),v^*(t))=(u_1^*(t),\dots,u_n^*(t),v_1^*(t),\dots,v_m^*(t))$ is a solution of the BAM neural network system (2.1) with initial conditions

$$(u^*(s),v^*(s))=(\varphi^*(s),\psi^*(s)),\quad\text{for a.e. } s\in[-\varsigma,0],$$

and that the corresponding output $(\eta^*(t),\gamma^*(t))=(\eta_1^*(t),\dots,\eta_n^*(t),\gamma_1^*(t),\dots,\gamma_m^*(t)):[-\varsigma,+\infty)\to\mathbb{R}^{n+m}$ satisfies $\eta_i^*(t)\in\overline{co}[g_i(u_i^*(t))]$ and $\gamma_j^*(t)\in\overline{co}[f_j(v_j^*(t))]$.

Consider the following Lyapunov function:

(4.1)
$$\begin{aligned}U(t)={}&\sum_{i=1}^{n}\xi_i|u_i(t)-u_i^*(t)|e^{\rho t}+\sum_{j=1}^{m}\xi_j|v_j(t)-v_j^*(t)|e^{\rho t}+\sum_{i=1}^{n}\sum_{j=1}^{m}\xi_i\int_{t-\tau_{ij}(t)}^{t}\frac{|d_{ij}(\varphi_{ij}^{-1}(u))|}{1-\tau_{ij}'(\varphi_{ij}^{-1}(u))}|\gamma_j(u)-\gamma_j^*(u)|e^{\rho(u+\tau_{ij}^{M})}\,du\\&+\sum_{i=1}^{n}\sum_{j=1}^{m}\xi_i\int_{-\delta_{ij}(t)}^{0}\int_{t+s}^{t}|e_{ij}(u-s)||\gamma_j(u)-\gamma_j^*(u)|e^{\rho(u-s)}\,du\,ds\\&+\sum_{j=1}^{m}\sum_{i=1}^{n}\xi_j\int_{t-\sigma_{ji}(t)}^{t}\frac{|q_{ji}(\tilde\varphi_{ji}^{-1}(u))|}{1-\sigma_{ji}'(\tilde\varphi_{ji}^{-1}(u))}|\eta_i(u)-\eta_i^*(u)|e^{\rho(u+\sigma_{ji}^{M})}\,du\\&+\sum_{j=1}^{m}\sum_{i=1}^{n}\xi_j\int_{-\kappa_{ji}(t)}^{0}\int_{t+s}^{t}|h_{ji}(u-s)||\eta_i(u)-\eta_i^*(u)|e^{\rho(u-s)}\,du\,ds.\end{aligned}$$

Obviously, U(t) is regular. Meanwhile, the solutions $(u^*(t),v^*(t))$ and $(u(t),v(t))$ of the discontinuous system (2.1) are absolutely continuous, so U(t) is differentiable for a.e. t ≥ 0.

Define $\tilde\vartheta_i(t)=\operatorname{sign}\{u_i(t)-u_i^*(t)\}$ if $u_i(t)\ne u_i^*(t)$, while $\tilde\vartheta_i(t)$ can be chosen arbitrarily in [−1, 1] if $u_i(t)=u_i^*(t)$. In particular, let $\tilde\vartheta_i(t)$ be as follows:

$$\tilde\vartheta_i(t)=\begin{cases}0, & u_i(t)-u_i^*(t)=\eta_i(t)-\eta_i^*(t)=0,\\ \operatorname{sign}\{\eta_i(t)-\eta_i^*(t)\}, & u_i(t)=u_i^*(t)\ \text{and}\ \eta_i(t)\ne\eta_i^*(t),\\ \operatorname{sign}\{u_i(t)-u_i^*(t)\}, & u_i(t)\ne u_i^*(t).\end{cases}$$

Then, we have

(4.2) $\tilde\vartheta_i(t)\{u_i(t)-u_i^*(t)\}=|u_i(t)-u_i^*(t)|$, $\quad\tilde\vartheta_i(t)\{\eta_i(t)-\eta_i^*(t)\}=|\eta_i(t)-\eta_i^*(t)|$, $\quad i=1,2,\dots,n$.

Similarly, define $\tilde\theta_j(t)=\operatorname{sign}\{v_j(t)-v_j^*(t)\}$ if $v_j(t)\ne v_j^*(t)$, while $\tilde\theta_j(t)$ can be chosen arbitrarily in [−1, 1] if $v_j(t)=v_j^*(t)$. In particular, let $\tilde\theta_j(t)$ be as follows:

$$\tilde\theta_j(t)=\begin{cases}0, & v_j(t)-v_j^*(t)=\gamma_j(t)-\gamma_j^*(t)=0,\\ \operatorname{sign}\{\gamma_j(t)-\gamma_j^*(t)\}, & v_j(t)=v_j^*(t)\ \text{and}\ \gamma_j(t)\ne\gamma_j^*(t),\\ \operatorname{sign}\{v_j(t)-v_j^*(t)\}, & v_j(t)\ne v_j^*(t).\end{cases}$$

Then, we have

(4.3) $\tilde\theta_j(t)\{v_j(t)-v_j^*(t)\}=|v_j(t)-v_j^*(t)|$, $\quad\tilde\theta_j(t)\{\gamma_j(t)-\gamma_j^*(t)\}=|\gamma_j(t)-\gamma_j^*(t)|$, $\quad j=1,2,\dots,m$.

Now, by applying Lemma 1 and calculating the time derivative of U(t) along the solution trajectories of system (2.1) in the sense of (2.3), we can get for a.e. t ≥ 0 that

(4.4)
$$\begin{aligned}\frac{dU(t)}{dt}={}&\rho e^{\rho t}\Big[\sum_{i=1}^{n}\xi_i|u_i(t)-u_i^*(t)|+\sum_{j=1}^{m}\xi_j|v_j(t)-v_j^*(t)|\Big]+\sum_{i=1}^{n}e^{\rho t}\xi_i\frac{d|u_i(t)-u_i^*(t)|}{dt}+\sum_{j=1}^{m}e^{\rho t}\xi_j\frac{d|v_j(t)-v_j^*(t)|}{dt}\\&+\sum_{i=1}^{n}\sum_{j=1}^{m}\xi_i\frac{|d_{ij}(\varphi_{ij}^{-1}(t))|}{1-\tau_{ij}'(\varphi_{ij}^{-1}(t))}|\gamma_j(t)-\gamma_j^*(t)|e^{\rho(t+\tau_{ij}^{M})}-\sum_{i=1}^{n}\sum_{j=1}^{m}\xi_i|d_{ij}(t)||\gamma_j(t-\tau_{ij}(t))-\gamma_j^*(t-\tau_{ij}(t))|e^{\rho(t-\tau_{ij}(t)+\tau_{ij}^{M})}\\&+\sum_{i=1}^{n}\sum_{j=1}^{m}\xi_i\delta_{ij}(t)\int_{t-\delta_{ij}(t)}^{t}|e_{ij}(u+\delta_{ij}(t))||\gamma_j(u)-\gamma_j^*(u)|e^{\rho(u+\delta_{ij}(t))}\,du\\&+\sum_{i=1}^{n}\sum_{j=1}^{m}\xi_i\int_{-\delta_{ij}(t)}^{0}|e_{ij}(t-s)||\gamma_j(t)-\gamma_j^*(t)|e^{\rho(t-s)}\,ds-\sum_{i=1}^{n}\sum_{j=1}^{m}\xi_i\int_{-\delta_{ij}(t)}^{0}|e_{ij}(t)||\gamma_j(t+s)-\gamma_j^*(t+s)|e^{\rho t}\,ds\\&+\sum_{j=1}^{m}\sum_{i=1}^{n}\xi_j\frac{|q_{ji}(\tilde\varphi_{ji}^{-1}(t))|}{1-\sigma_{ji}'(\tilde\varphi_{ji}^{-1}(t))}|\eta_i(t)-\eta_i^*(t)|e^{\rho(t+\sigma_{ji}^{M})}-\sum_{j=1}^{m}\sum_{i=1}^{n}\xi_j|q_{ji}(t)||\eta_i(t-\sigma_{ji}(t))-\eta_i^*(t-\sigma_{ji}(t))|e^{\rho(t-\sigma_{ji}(t)+\sigma_{ji}^{M})}\\&+\sum_{j=1}^{m}\sum_{i=1}^{n}\xi_j\kappa_{ji}(t)\int_{t-\kappa_{ji}(t)}^{t}|h_{ji}(u+\kappa_{ji}(t))||\eta_i(u)-\eta_i^*(u)|e^{\rho(u+\kappa_{ji}(t))}\,du\\&+\sum_{j=1}^{m}\sum_{i=1}^{n}\xi_j\int_{-\kappa_{ji}(t)}^{0}|h_{ji}(t-s)||\eta_i(t)-\eta_i^*(t)|e^{\rho(t-s)}\,ds-\sum_{j=1}^{m}\sum_{i=1}^{n}\xi_j\int_{-\kappa_{ji}(t)}^{0}|h_{ji}(t)||\eta_i(t+s)-\eta_i^*(t+s)|e^{\rho t}\,ds.\end{aligned}$$

Note that, by (2.1), we have

(4.5)
$$\begin{cases}\dfrac{d[u_i(t)-u_i^*(t)]}{dt}=-a_i[u_i(t)-u_i^*(t)]+\displaystyle\sum_{j=1}^{m}c_{ij}(t)[\gamma_j(t)-\gamma_j^*(t)]+\sum_{j=1}^{m}d_{ij}(t)[\gamma_j(t-\tau_{ij}(t))-\gamma_j^*(t-\tau_{ij}(t))]+\sum_{j=1}^{m}e_{ij}(t)\int_{t-\delta_{ij}(t)}^{t}[\gamma_j(s)-\gamma_j^*(s)]\,ds,\\[2mm]\dfrac{d[v_j(t)-v_j^*(t)]}{dt}=-b_j[v_j(t)-v_j^*(t)]+\displaystyle\sum_{i=1}^{n}p_{ji}(t)[\eta_i(t)-\eta_i^*(t)]+\sum_{i=1}^{n}q_{ji}(t)[\eta_i(t-\sigma_{ji}(t))-\eta_i^*(t-\sigma_{ji}(t))]+\sum_{i=1}^{n}h_{ji}(t)\int_{t-\kappa_{ji}(t)}^{t}[\eta_i(s)-\eta_i^*(s)]\,ds.\end{cases}$$

Substituting (4.5) into (4.4), in view of (4.2) and (4.3), we have

$$\begin{aligned}\frac{dU(t)}{dt}\le{}& e^{\rho t}\sum_{i=1}^{n}\xi_i(\rho-a_i^{L})|u_i(t)-u_i^*(t)|\\&+\sum_{i=1}^{n}\Big\{\xi_ic_{ii}(t)+\sum_{j=1,j\ne i}^{m}\xi_j|c_{ji}(t)|+\sum_{j=1}^{m}\xi_j\frac{|d_{ji}(\varphi_{ji}^{-1}(t))|}{1-\tau_{ji}'(\varphi_{ji}^{-1}(t))}e^{\rho\tau_{ji}^{M}}\\&\qquad+\sum_{j=1}^{m}\xi_j\delta_{ji}(t)\int_{t-\delta_{ji}(t)}^{t}|e_{ji}(u+\delta_{ji}(t))|e^{\rho[u+\delta_{ji}(t)-t]}\,du+\sum_{j=1}^{m}\xi_j\int_{-\delta_{ji}(t)}^{0}|e_{ji}(t-s)|e^{\rho(t-s)}\,ds\Big\}\cdot|\gamma_i(t)-\gamma_i^*(t)|\\&+e^{\rho t}\sum_{j=1}^{m}\xi_j(\rho-b_j^{L})|v_j(t)-v_j^*(t)|\\&+\sum_{j=1}^{m}\Big\{\xi_jp_{jj}(t)+\sum_{i=1,i\ne j}^{n}\xi_i|p_{ij}(t)|+\sum_{i=1}^{n}\xi_ie^{\rho\sigma_{ij}^{M}}\frac{|q_{ij}(\tilde\varphi_{ij}^{-1}(t))|}{1-\sigma_{ij}'(\tilde\varphi_{ij}^{-1}(t))}\\&\qquad+\sum_{i=1}^{n}\xi_i\kappa_{ij}(t)\int_{t-\kappa_{ij}(t)}^{t}|h_{ij}(u+\kappa_{ij}(t))|e^{\rho[u+\kappa_{ij}(t)-t]}\,du+\sum_{i=1}^{n}\xi_i\int_{-\kappa_{ij}(t)}^{0}|h_{ij}(t-s)|e^{-s}\,ds\Big\}\cdot|\eta_j(t)-\eta_j^*(t)|,\end{aligned}$$

which together with (H4) yields

$$\frac{dU(t)}{dt}<0,\quad\text{for a.e. } t\ge 0.$$

Furthermore, from (4.1), it follows that

$$U(t)\ge\sum_{i=1}^{n}\xi_i|u_i(t)-u_i^*(t)|e^{\rho t}+\sum_{j=1}^{m}\xi_j|v_j(t)-v_j^*(t)|e^{\rho t},$$

thus,

$$\sum_{i=1}^{n}|u_i(t)-u_i^*(t)|+\sum_{j=1}^{m}|v_j(t)-v_j^*(t)|\le\frac{e^{-\rho t}}{\xi_m}U(t)\le\frac{e^{-\rho t}}{\xi_m}U(0),$$

where $\xi_m:=\min_{1\le i\le n+m}\{\xi_i\}$. Moreover, U(0) is a constant. Thus, by Definition 2.2, the proof is complete.□

5 Corollaries

Corollary 5.1.

Suppose that the assumptions (H1) and (H2*) hold. Then for any solution $(u(t),v(t))$ of the discontinuous BAM neural network system (2.1) there exist two positive constants

$$M:=2(\|\varphi\|^{2}+\|\psi\|^{2})+\frac{2}{\alpha}\max_{1\le i\le n}\sum_{j=1}^{m}\big[(c_{ij}^{M}+d_{ij}^{M}+e_{ij}^{M}\delta_{ij}^{M})\gamma_j^{M}+I_i^{M}\big]+\frac{2}{\alpha}\max_{1\le j\le m}\sum_{i=1}^{n}\big[(p_{ji}^{M}+q_{ji}^{M}+h_{ji}^{M}\kappa_{ji}^{M})\eta_i^{M}+J_j^{M}\big]$$

and $\tilde{M}$ such that

$$|u_i(t)|\le M,\ i=1,2,\dots,n,\qquad |v_j(t)|\le M,\ j=1,2,\dots,m,\quad\text{for } t\in[-\varsigma,+\infty),$$
$$|\eta_i(t)|\le\tilde{M},\ i=1,2,\dots,n,\qquad |\gamma_j(t)|\le\tilde{M},\ j=1,2,\dots,m,\quad\text{for a.e. } t\in[-\varsigma,+\infty).$$

Proof.

The proof is similar to that of Theorem 3.1 and is omitted here.□

Corollary 5.2.

Suppose that the assumptions (H1), (H2*), (H3) and (H4) hold. Then any solution $(u(t),v(t))$ of the discontinuous BAM neural network system (2.1) associated with an output $(\eta(t),\gamma(t))$ is asymptotically almost periodic; i.e., for any $\varepsilon>0$, there exist $T>0$, $l=l(\varepsilon)$ and $\omega=\omega(\varepsilon)$ in any interval of length $l(\varepsilon)$ such that

$$\sum_{i=1}^{n}|u_i(t+\omega)-u_i(t)|+\sum_{j=1}^{m}|v_j(t+\omega)-v_j(t)|\le\varepsilon,\quad\text{for all } t\ge T.$$

Proof.

The proof is similar to that of Theorem 3.2 and is omitted here.□

Corollary 5.3.

Suppose that the assumptions (H1), (H2*), (H3) and (H4) are satisfied. Then the unique almost periodic solution of the discontinuous BAM neural network system (2.1) is globally exponentially stable.

Proof.

The proof is similar to that of Theorem 4.1 and is omitted here.□

6 Examples and numerical simulations

Example 6.1.

Consider the following BAM neural networks with discontinuous activations:

(6.1)
$$\begin{cases}\dfrac{du_1(t)}{dt}=-a_1u_1(t)+c_{11}(t)f_1(v_1(t))+c_{12}(t)f_2(v_2(t))+d_{11}(t)f_1(v_1(t-\tau_{11}(t)))+d_{12}(t)f_2(v_2(t-\tau_{12}(t)))+I_1(t),\\ \dfrac{du_2(t)}{dt}=-a_2u_2(t)+c_{21}(t)f_1(v_1(t))+c_{22}(t)f_2(v_2(t))+d_{21}(t)f_1(v_1(t-\tau_{21}(t)))+d_{22}(t)f_2(v_2(t-\tau_{22}(t)))+I_2(t),\\ \dfrac{dv_1(t)}{dt}=-b_1v_1(t)+p_{11}(t)g_1(u_1(t))+p_{12}(t)g_2(u_2(t))+q_{11}(t)g_1(u_1(t-\sigma_{11}(t)))+q_{12}(t)g_2(u_2(t-\sigma_{12}(t)))+J_1(t),\\ \dfrac{dv_2(t)}{dt}=-b_2v_2(t)+p_{21}(t)g_1(u_1(t))+p_{22}(t)g_2(u_2(t))+q_{21}(t)g_1(u_1(t-\sigma_{21}(t)))+q_{22}(t)g_2(u_2(t-\sigma_{22}(t)))+J_2(t),\end{cases}$$

where i, j = 1, 2, $\tau_{ij}(t)=0.1$, $\sigma_{ji}(t)=0.1$ and

$$a_1=0.8,\qquad a_2=0.7,$$
$$I_1(t)=0.2\sin 2t+0.1\sin 5t,\qquad I_2(t)=0.3\cos 3t-0.1\sin t,$$
$$(c_{ij})_{2\times2}=\begin{pmatrix}0.8 & 0.2\\ 0.1 & 0.9\end{pmatrix},\quad (d_{ij})_{2\times2}=\begin{pmatrix}0.2 & 0.1\\ 0.8 & 0.1\end{pmatrix},\quad (e_{ij})_{2\times2}=\begin{pmatrix}0 & 0\\ 0 & 0\end{pmatrix},$$

and

$$b_1=0.5,\qquad b_2=0.6,$$
$$J_1(t)=0.2\sin 2t+0.1\sin 5t,\qquad J_2(t)=0.3\cos 3t-0.1\sin t,$$
$$(p_{ji})_{2\times2}=\begin{pmatrix}0.8 & 0.3\\ 0.7 & 0.8\end{pmatrix},\quad (q_{ji})_{2\times2}=\begin{pmatrix}0.1 & 0.02\\ 0.4 & 0.1\end{pmatrix},\quad (h_{ji})_{2\times2}=\begin{pmatrix}0 & 0\\ 0 & 0\end{pmatrix}.$$

Then, for i, j = 1, 2, we have $\varsigma=\max\{\tau,\sigma\}=0.1$.

Moreover, let

$$f_1(x)=f_2(x)=\begin{cases}\dfrac{0.2x}{x^{2}+1}, & -1<x<1,\\[1mm] \dfrac{0.2x-1}{x^{2}+1}, & x>1,\end{cases}\qquad g_1(x)=g_2(x)=\begin{cases}\arctan(x)+1, & x<1,\\ \arctan(x)-1, & x>1.\end{cases}$$

It is easy to see that the activation functions $f_i(x)$ and $g_j(x)$ (i, j = 1, 2) are discontinuous, bounded and monotonically non-decreasing. This can be seen in Figure 1.

Let $\xi_1=15$, $\xi_2=8$ and $\rho=0.1$. Then $a_i^{L}\ge\rho$, $b_j^{L}\ge\rho$ and

$$\begin{aligned}\limsup_{t\to+\infty}\Upsilon_1(t)&=\limsup_{t\to+\infty}\Big\{\xi_1c_{11}(t)+\xi_2|c_{21}(t)|+\xi_1e^{\rho\tau}\frac{|d_{11}(\varphi_{11}^{-1}(t))|}{1-\tau_{11}'(\varphi_{11}^{-1}(t))}+\xi_2e^{\rho\tau}\frac{|d_{21}(\varphi_{21}^{-1}(t))|}{1-\tau_{21}'(\varphi_{21}^{-1}(t))}\Big\}\le-2.304725<0,\\ \limsup_{t\to+\infty}\Upsilon_2(t)&=\limsup_{t\to+\infty}\Big\{\xi_2c_{22}(t)+\xi_1|c_{12}(t)|+\xi_1e^{\rho\tau}\frac{|d_{12}(\varphi_{12}^{-1}(t))|}{1-\tau_{12}'(\varphi_{12}^{-1}(t))}+\xi_2e^{\rho\tau}\frac{|d_{22}(\varphi_{22}^{-1}(t))|}{1-\tau_{22}'(\varphi_{22}^{-1}(t))}\Big\}\le-0.3575278<0,\end{aligned}$$

and

$$\begin{aligned}\limsup_{t\to+\infty}\Gamma_1(t)&=\limsup_{t\to+\infty}\Big\{\xi_1p_{11}(t)+\xi_2|p_{21}(t)|+\xi_1e^{\rho\sigma}\frac{|q_{11}(\tilde\varphi_{11}^{-1}(t))|}{1-\sigma_{11}'(\tilde\varphi_{11}^{-1}(t))}+\xi_2e^{\rho\sigma}\frac{|q_{21}(\tilde\varphi_{21}^{-1}(t))|}{1-\sigma_{21}'(\tilde\varphi_{21}^{-1}(t))}\Big\}\le-0.8007393<0,\\ \limsup_{t\to+\infty}\Gamma_2(t)&=\limsup_{t\to+\infty}\Big\{\xi_2p_{22}(t)+\xi_1|p_{12}(t)|+\xi_1e^{\rho\sigma}\frac{|q_{12}(\tilde\varphi_{12}^{-1}(t))|}{1-\sigma_{12}'(\tilde\varphi_{12}^{-1}(t))}+\xi_2e^{\rho\sigma}\frac{|q_{22}(\tilde\varphi_{22}^{-1}(t))|}{1-\sigma_{22}'(\tilde\varphi_{22}^{-1}(t))}\Big\}\le-3.047193<0.\end{aligned}$$

Figure 1: (a) Discontinuous activation functions $f_i$ (i = 1, 2) for system (6.1); (b) discontinuous activation functions $g_j$ (j = 1, 2) for system (6.1).

Obviously, the coefficients of system (6.1) satisfy all the conditions of Theorems 3.1, 3.2 and 4.1; thus we can conclude that system (6.1) possesses a unique almost periodic solution which is globally exponentially stable. This is illustrated in Figures 2–4.
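For readers who wish to reproduce simulations of this kind, the following is a minimal sketch (not the authors' code). It assumes the parameter values listed above, treats the constant delay with a simple explicit Euler scheme and a history buffer, extends the second branch of $f_1=f_2$ to all $|x|\ge 1$ for evaluation, and evaluates the discontinuous activations directly, which is a common numerical surrogate for the Filippov solution; the reconstructed activation formulas should be checked against the original paper before any serious use.

```python
# Minimal simulation sketch for Example 6.1 (illustrative only; not the authors' code).
import numpy as np
import matplotlib.pyplot as plt

a = np.array([0.8, 0.7]); b = np.array([0.5, 0.6])
c = np.array([[0.8, 0.2], [0.1, 0.9]]); d = np.array([[0.2, 0.1], [0.8, 0.1]])
p = np.array([[0.8, 0.3], [0.7, 0.8]]); q = np.array([[0.1, 0.02], [0.4, 0.1]])

def I(t):  # external inputs as listed above
    return np.array([0.2*np.sin(2*t) + 0.1*np.sin(5*t), 0.3*np.cos(3*t) - 0.1*np.sin(t)])
def J(t):
    return np.array([0.2*np.sin(2*t) + 0.1*np.sin(5*t), 0.3*np.cos(3*t) - 0.1*np.sin(t)])

def f(x):  # reconstructed discontinuous activation for the v-layer (assumption)
    return np.where(np.abs(x) < 1, 0.2*x/(x**2 + 1), (0.2*x - 1)/(x**2 + 1))
def g(x):  # reconstructed discontinuous activation for the u-layer (assumption)
    return np.where(x < 1, np.arctan(x) + 1, np.arctan(x) - 1)

h_step, tau = 0.01, 0.1
lag = int(round(tau/h_step))             # delay expressed in Euler steps
T = 100.0; N = int(T/h_step)
state = np.zeros((N + 1, 4))             # columns: u1, u2, v1, v2
state[0] = np.random.uniform(-1, 1, 4)   # random constant initial history
hist = np.tile(state[0], (lag + 1, 1))   # buffer holding the last tau seconds

for k in range(N):
    t = k*h_step
    u, v = state[k, :2], state[k, 2:]
    u_del, v_del = hist[0, :2], hist[0, 2:]        # u(t - sigma), v(t - tau)
    du = -a*u + c @ f(v) + d @ f(v_del) + I(t)     # first two equations of (6.1)
    dv = -b*v + p @ g(u) + q @ g(u_del) + J(t)     # last two equations of (6.1)
    state[k + 1] = state[k] + h_step*np.concatenate([du, dv])
    hist = np.vstack([hist[1:], state[k + 1]])     # slide the history buffer

t_grid = np.linspace(0, T, N + 1)
plt.plot(t_grid, state); plt.legend(["u1", "u2", "v1", "v2"]); plt.xlabel("t"); plt.show()
```

With step $h=0.01$ the delay $\tau=0.1$ corresponds to ten buffered steps; since $e_{ij}\equiv h_{ji}\equiv 0$ in this example, no distributed-delay quadrature is needed.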

Figure 2: (a) Time-domain behavior of the state variables $u_1(t)$, $u_2(t)$, $v_1(t)$ and $v_2(t)$ for system (6.1) with random initial conditions; (b) phase plane behavior of the state variables $(u_1(t),u_2(t))$ and $(v_1(t),v_2(t))$ for system (6.1).

Figure 3: (a) Three-dimensional trajectory of the state variables $u_1(t)$, $u_2(t)$ and $v_1(t)$ for system (6.1); (b) three-dimensional trajectory of the state variables $u_1(t)$, $u_2(t)$ and $v_2(t)$ for system (6.1).

Figure 4: (a) Three-dimensional trajectory of the state variables $v_1(t)$, $v_2(t)$ and $u_1(t)$ for system (6.1); (b) three-dimensional trajectory of the state variables $v_1(t)$, $v_2(t)$ and $u_2(t)$ for system (6.1).

Example 6.2.

Consider the following BAM neural networks with discontinuous activations:

(6.2)
$$\begin{cases}\dfrac{du_1(t)}{dt}=-a_1u_1(t)+c_{11}(t)f_1(v_1(t))+c_{12}(t)f_2(v_2(t))+d_{11}(t)f_1(v_1(t-\tau_{11}(t)))+d_{12}(t)f_2(v_2(t-\tau_{12}(t)))+I_1(t),\\ \dfrac{du_2(t)}{dt}=-a_2u_2(t)+c_{21}(t)f_1(v_1(t))+c_{22}(t)f_2(v_2(t))+d_{21}(t)f_1(v_1(t-\tau_{21}(t)))+d_{22}(t)f_2(v_2(t-\tau_{22}(t)))+I_2(t),\\ \dfrac{dv_1(t)}{dt}=-b_1v_1(t)+p_{11}(t)g_1(u_1(t))+p_{12}(t)g_2(u_2(t))+q_{11}(t)g_1(u_1(t-\sigma_{11}(t)))+q_{12}(t)g_2(u_2(t-\sigma_{12}(t)))+J_1(t),\\ \dfrac{dv_2(t)}{dt}=-b_2v_2(t)+p_{21}(t)g_1(u_1(t))+p_{22}(t)g_2(u_2(t))+q_{21}(t)g_1(u_1(t-\sigma_{21}(t)))+q_{22}(t)g_2(u_2(t-\sigma_{22}(t)))+J_2(t),\end{cases}$$

where i, j = 1, 2, $\tau_{ij}(t)=0.1$, $\sigma_{ji}(t)=0.2$ and

$$a_1=0.5,\qquad a_2=0.3,$$
$$I_1(t)=0.3\sin 2t-0.1\sin 5t,\qquad I_2(t)=0.2\cos 2t+0.1\sin t,$$
$$(c_{ij})_{2\times2}=\begin{pmatrix}0.8 & 0.1\\ 0.4 & 0.8\end{pmatrix},\quad (d_{ij})_{2\times2}=\begin{pmatrix}0.2 & 0.1\\ 0.2 & 0.4\end{pmatrix},\quad (e_{ij})_{2\times2}=\begin{pmatrix}0 & 0\\ 0 & 0\end{pmatrix},$$

and

$$b_1=0.4,\qquad b_2=0.2,$$
$$J_1(t)=0.3\cos 3t-0.1\sin t,\qquad J_2(t)=0.2\sin 2t+0.1\cos 5t,$$
$$(p_{ji})_{2\times2}=\begin{pmatrix}0.8 & 0.4\\ 0.3 & 0.9\end{pmatrix},\quad (q_{ji})_{2\times2}=\begin{pmatrix}0.05 & 0.1\\ 0.1 & 0.1\end{pmatrix},\quad (h_{ji})_{2\times2}=\begin{pmatrix}0 & 0\\ 0 & 0\end{pmatrix}.$$

Then, for i, j = 1, 2, we have $\varsigma=\max\{\tau,\sigma\}=0.2$.

Moreover, let

$$f_1(x)=f_2(x)=\begin{cases}\dfrac{0.2x}{x^{2}+1}, & -1<x<1,\\[1mm] \dfrac{0.2x+1}{x^{2}+1}, & x>1,\end{cases}\qquad g_1(x)=g_2(x)=\begin{cases}\arctan(x)+1, & x<1,\\ \arctan(x)-1, & x>1.\end{cases}$$

It is easy to see that the activation functions $f_i(x)$ and $g_j(x)$ (i, j = 1, 2) are discontinuous, bounded and monotonically non-increasing. This can be seen in Figure 5.

Let $\xi_1=6$, $\xi_2=4$ and $\rho=0.1$. Then $a_i^{L}\ge\rho$, $b_j^{L}\ge\rho$ and

$$\begin{aligned}\limsup_{t\to+\infty}\Upsilon_1(t)&=\limsup_{t\to+\infty}\Big\{\xi_1c_{11}(t)+\xi_2|c_{21}(t)|+\xi_1e^{\rho\tau}\frac{|d_{11}(\varphi_{11}^{-1}(t))|}{1-\tau_{11}'(\varphi_{11}^{-1}(t))}+\xi_2e^{\rho\tau}\frac{|d_{21}(\varphi_{21}^{-1}(t))|}{1-\tau_{21}'(\varphi_{21}^{-1}(t))}\Big\}\le-0.3318413<0,\\ \limsup_{t\to+\infty}\Upsilon_2(t)&=\limsup_{t\to+\infty}\Big\{\xi_2c_{22}(t)+\xi_1|c_{12}(t)|+\xi_1e^{\rho\tau}\frac{|d_{12}(\varphi_{12}^{-1}(t))|}{1-\tau_{12}'(\varphi_{12}^{-1}(t))}+\xi_2e^{\rho\tau}\frac{|d_{22}(\varphi_{22}^{-1}(t))|}{1-\tau_{22}'(\varphi_{22}^{-1}(t))}\Big\}\le-0.2758280<0,\end{aligned}$$

and

$$\begin{aligned}\limsup_{t\to+\infty}\Gamma_1(t)&=\limsup_{t\to+\infty}\Big\{\xi_1p_{11}(t)+\xi_2|p_{21}(t)|+\xi_1e^{\rho\sigma}\frac{|q_{11}(\tilde\varphi_{11}^{-1}(t))|}{1-\sigma_{11}'(\tilde\varphi_{11}^{-1}(t))}+\xi_2e^{\rho\sigma}\frac{|q_{21}(\tilde\varphi_{21}^{-1}(t))|}{1-\sigma_{21}'(\tilde\varphi_{21}^{-1}(t))}\Big\}\le-2.16078<0,\\ \limsup_{t\to+\infty}\Gamma_2(t)&=\limsup_{t\to+\infty}\Big\{\xi_2p_{22}(t)+\xi_1|p_{12}(t)|+\xi_1e^{\rho\sigma}\frac{|q_{12}(\tilde\varphi_{12}^{-1}(t))|}{1-\sigma_{12}'(\tilde\varphi_{12}^{-1}(t))}+\xi_2e^{\rho\sigma}\frac{|q_{22}(\tilde\varphi_{22}^{-1}(t))|}{1-\sigma_{22}'(\tilde\varphi_{22}^{-1}(t))}\Big\}\le-0.9338752<0.\end{aligned}$$

Figure 5: (a) Discontinuous activation functions $f_i$ (i = 1, 2) for system (6.2); (b) discontinuous activation functions $g_j$ (j = 1, 2) for system (6.2).

As a result, the coefficients of system (6.2) satisfy all the conditions of Corollaries 5.1–5.3; thus we can conclude that system (6.2) possesses a unique almost periodic solution which is globally exponentially stable. This is illustrated in Figures 6–8.
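If a numerical check of this example is desired, the Euler/history-buffer loop sketched after Example 6.1 can be reused; only the data change. The block below records the Example 6.2 values (the activation branch formulas again follow the reconstruction given above and should be treated as assumptions); note that here $\tau=0.1$ but $\sigma=0.2$, so two separate lag indices are needed in the history buffer.

```python
# Example 6.2 data for the simulation sketch given after Example 6.1 (assumed values).
import numpy as np

a = np.array([0.5, 0.3]); b = np.array([0.4, 0.2])
c = np.array([[0.8, 0.1], [0.4, 0.8]]); d = np.array([[0.2, 0.1], [0.2, 0.4]])
p = np.array([[0.8, 0.4], [0.3, 0.9]]); q = np.array([[0.05, 0.1], [0.1, 0.1]])
tau, sigma = 0.1, 0.2  # different discrete delays: keep separate lags in the buffer

def I(t): return np.array([0.3*np.sin(2*t) - 0.1*np.sin(5*t), 0.2*np.cos(2*t) + 0.1*np.sin(t)])
def J(t): return np.array([0.3*np.cos(3*t) - 0.1*np.sin(t), 0.2*np.sin(2*t) + 0.1*np.cos(5*t)])
def f(x): return np.where(np.abs(x) < 1, 0.2*x/(x**2 + 1), (0.2*x + 1)/(x**2 + 1))
def g(x): return np.where(x < 1, np.arctan(x) + 1, np.arctan(x) - 1)
```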

Figure 6: (a) Time-domain behavior of the state variables $u_1(t)$, $u_2(t)$, $v_1(t)$ and $v_2(t)$ for system (6.2) with random initial conditions; (b) phase plane behavior of the state variables $(u_1(t),u_2(t))$ and $(v_1(t),v_2(t))$ for system (6.2).

Figure 7: (a) Three-dimensional trajectory of the state variables $u_1(t)$, $u_2(t)$ and $v_1(t)$ for system (6.2); (b) three-dimensional trajectory of the state variables $u_1(t)$, $u_2(t)$ and $v_2(t)$ for system (6.2).

Figure 8: (a) Three-dimensional trajectory of the state variables $v_1(t)$, $v_2(t)$ and $u_1(t)$ for system (6.2); (b) three-dimensional trajectory of the state variables $v_1(t)$, $v_2(t)$ and $u_2(t)$ for system (6.2).

Remark 2.

From Example 6.1 and Example 6.2, one can see that the activations are discontinuous; in particular, they are not continuous, Lipschitz continuous or smooth, which is different from the related references in the literature, such as [3], [4], [6], [7], [8], [10], [14], [15], [16]. The results established in the present paper extend the previous work on BAM neural networks to the discontinuous case.

Remark 3.

Since the existence and global exponential stability of almost periodic solutions of discontinuous BAM neural networks with discrete and distributed delays had not been studied before, it is clear that the results obtained in the references cited therein are not applicable to Examples 6.1 and 6.2. Here a novel proof technique is employed to establish criteria which guarantee the existence and global exponential stability of almost periodic solutions of discontinuous BAM neural networks with discrete and distributed delays.

7 Conclusion

This paper has presented a class of discontinuous BAM neural networks with discrete and distributed delays. Under the framework of Filippov solutions, by applying differential inclusion theory, the fundamental solution matrix of the coefficients, inequality techniques and non-smooth analysis theory with a Lyapunov-like strategy, the existence and global exponential stability of almost periodic solutions for the addressed BAM neural networks have been proved. In addition, two typical numerical examples and the corresponding simulations have been presented at the end of the paper to illustrate the effectiveness and feasibility of the proposed criteria. It should be pointed out that this is the first time the almost periodic dynamic behavior of discontinuous BAM neural networks with discrete and distributed delays has been investigated. Consequently, some related known results can be extended.

Moreover, little attention has been devoted to the almost periodic dynamic behavior of BAM neural networks with discontinuous activations so far. The method developed here affords a possible approach for analysing the existence and global exponential stability of almost periodic solutions of other delayed BAM neural networks with discontinuous activations, such as Cohen-Grossberg BAM (CGBAM) neural networks with discontinuous activations, neutral-type BAM neural networks with discontinuous activations, interval general BAM neural networks with discontinuous activations, and so on. These issues will be the topic of our future research.


Corresponding author: Fanchao Kong, School of Mathematics and Statistics, Anhui Normal University, Wuhu, Anhui 241000, P.R. China, E-mail:

Funding source: Natural Science Foundation of Anhui Province

Award Identifier / Grant number: 2008085QA14

Funding source: Talent Foundation of Anhui Normal University

Award Identifier / Grant number: 751965

Funding source: National Natural Science Foundation of China

Award Identifier / Grant number: 12001011

Acknowledgements

The authors thank the anonymous reviewers for their insightful suggestions which improved this work significantly.

  1. Authors’ contributions: All authors read and approved the manuscript.

  2. Research funding: This work is supported by the National Natural Science Foundation of China (No.12001011), Anhui Provincial Natural Science Foundation (No.2008085QA14) and the Talent Foundation of Anhui Normal University (No.751965).

  3. Competing interests: The authors declare that they have no competing interests.

References

[1] B. Kosko, “Adaptive bidirectional associative memories,” Appl. Opt., vol. 26, pp. 4947–4960, 1987, https://doi.org/10.1364/ao.26.004947.Search in Google Scholar

[2] B. Kosko, “Bidirectional associative memories,” IEEE Trans. Syst. Man Cybern. Syst., vol. 18, pp. 49–60, 1988, https://doi.org/10.1109/21.87054.Search in Google Scholar

[3] M. Ali, S. Saravanan, M. Rani, et al.., “Asymptotic stability of Cohen-Grossberg BAM neutral type neural networks with distributed time varying delays,” Neural Process. Lett., vol. 46, pp. 991–1007, 2017, https://doi.org/10.1007/s11063-017-9622-6.Search in Google Scholar

[4] C. Aouiti, I. Gharbia, J. Cao, M. S. M’hamdi, and A. Alsaedi, “Existence and global exponential stability of pseudo almost periodic solution for neutral delay BAM neural networks with time-varying delay in leakage terms,” Chaos Solitons Fractals, vol. 107, pp. 111–127, 2018, https://doi.org/10.1016/j.chaos.2017.12.022.Search in Google Scholar

[5] A. Chen, L. Huang, and J. Cao, “Existence and stability of almost periodic solution for BAM neural networks with delays,” Appl. Math. Comput., vol. 137, no. 1, pp. 177–193, 2003, https://doi.org/10.1016/s0096-3003(02)00095-4.Search in Google Scholar

[6] R. Guo, Z. Zhang, X. Liu, et al.., “Existence, uniqueness, and exponential stability analysis for complex-valued memristor-based BAM neural networks with time delays,” Appl. Math. Comput., vol. 311, pp. 100–117, 2017, https://doi.org/10.1016/j.amc.2017.05.021.Search in Google Scholar

[7] W. Peng, Q. Wu, and Z. Zhang, “LMI-based global exponential stability of equilibrium point for neutral delayed BAM neural networks with delays in leakage terms via new inequality technique,” Neurocomputing, vol. 199, pp. 103–113, 2016, https://doi.org/10.1016/j.neucom.2016.03.030.Search in Google Scholar

[8] C. Sowmiya, R. Raja, Q. X. Zhu, and G. Rajchakit, “Further mean-square asymptotic stability of impulsive discrete-time stochastic BAM neural networks with Markovian jumping and multiple time-varying delays,” J. Franklin Inst., vol. 356, no. 1, pp. 561–591, 2019, https://doi.org/10.1016/j.jfranklin.2018.09.037.Search in Google Scholar

[9] Z. Wang and L. Huang, “Global stability analysis for delayed complex-valued BAM neural networks,” Neurocomputing, vol. 173, pp. 2083–2089, 2016, https://doi.org/10.1016/j.neucom.2015.09.086.Search in Google Scholar

[10] Z. Zhang, W. Liu, and D. Zhou, “Global asymptotic stability to a generalized Cohen-Grossberg BAM neural networks of neutral type delays,” Neutral Netw., vol. 25, pp. 94–105, 2012, https://doi.org/10.1016/j.neunet.2011.07.006.Search in Google Scholar PubMed

[11] Z. Zhang and Z. Quan, “Global exponential stability via inequality technique for inertial BAM neural networks with time delays,” Neurocomputing, vol. 151, pp. 1316–1326, 2015, https://doi.org/10.1016/j.neucom.2014.10.072.Search in Google Scholar

[12] Z. Zhang, A. Li, and L. Yang, “Global asymptotic periodic synchronization for delayed complex-valued BAM neural networks via vector-valued inequality techniques,” Neural Process. Lett., vol. 48, pp. 1019–1041, 2018, https://doi.org/10.1007/s11063-017-9722-3.Search in Google Scholar

[13] Z. Zhang and F. Lin, “Global asymptotic stability of periodic solutions for neutral-type delayed BAM neural networks by combining an abstract theorem of k-set contractive operator with LMI method,” Neural Process. Lett., vol. 50, pp. 1571–1588, 2019, https://doi.org/10.1007/s11063-018-9941-2.Search in Google Scholar

[14] F. Zhou and H. Yao, “Stability analysis for neutral-type inertial BAM neural networks with time-varying delays,” Nonlinear Dynam., vol. 92, no. 4, pp. 1583–1598, 2018, https://doi.org/10.1007/s11071-018-4148-7.Search in Google Scholar

[15] Q. X. Zhu, C. Huang, and X. Yang, “Exponential stability for stochastic jumping BAM neural networks with time-varying and distributed delays,” Nonlinear Anal. Hybrid Syst., vol. 5, no. 1, pp. 52–77, 2011, https://doi.org/10.1016/j.nahs.2010.08.005.Search in Google Scholar

[16] Q. X. Zhu and J. Cao, “Stability analysis of Markovian jump stochastic BAM neural networks with impulse control and mixed time delays,” IEEE Trans. Neural Netw. Learn. Syst., vol. 23, no. 3, pp. 467–479, 2012, https://doi.org/10.1109/TNNLS.2011.2182659.Search in Google Scholar PubMed

[17] M. Forti, P. Nistri, and D. Papini, “Global convergence of neural networks with discontinuous neuron activations,” IEEE Trans. Circuits Syst. I. Fundam. Theory Appl., vol. 50, pp. 1421–1435, 2003, https://doi.org/10.1109/tcsi.2003.818614.Search in Google Scholar

[18] Z. W. Cai, L. H. Huang, Z. Y. Guo, and X. Y. Chen, “On the periodic dynamics of a class of time-varying delayed neural networks via differential inclusions,” Neutral Netw., vol. 33, pp. 97–113, 2012, https://doi.org/10.1016/j.neunet.2012.04.009.Search in Google Scholar PubMed

[19] J. Cheng, J. H. Park, J. Cao, and W. Qi, “Hidden Markov model-based nonfragile state estimation of switched neural network with probabilistic quantized outputs,” IEEE Trans. Cybern., vol. 50, pp. 1900–1909, 2019, https://doi.org/10.1109/TCYB.2019.2909748.Search in Google Scholar PubMed

[20] J. Cheng, D. Zhang, W. Qi, J. Cao, and K. Shi, “Finite-time stabilization of T-S fuzzy Semi-Markov switching systems: a coupling memory sampled-data control approach,” J. Franklin Inst., vol. 357, pp. 11265–11280, 2019, https://doi.org/10.1016/j.jfranklin.2019.06.021.Search in Google Scholar

[21] J. Cheng, J. H. Park, X. Zhao, J. Cao, and W. Qi, “Static output feedback control of switched systems with quantization: a nonhomogeneous sojourn probability approach,” Int. J. Robust Nonlinear Control, vol. 29, no. 17, pp. 5992–6005, 2019, https://doi.org/10.1002/rnc.4703.Search in Google Scholar

[22] L. H. Huang, Z. Y. Guo, and J. F. Wang, Theory and Applications of Differential Equations with Discontinuous Right-Hand Sides, Beijing, Science Press, 2011, (in Chinese).Search in Google Scholar

[23] F. Kong, Q. Zhu, F. Liang, and J. Nieto, “Robust fixed-time synchronization of discontinuous Cohen-Grossberg neural networks with mixed time delays,” Nonlinear Anal. Model. Control, vol. 24, no. 4, pp. 603–625, 2019, https://doi.org/10.15388/na.2019.4.7.Search in Google Scholar

[24] F. Kong and J. Nieto, “Almost periodic dynamical behaviors of the hematopoiesis model with mixed discontinuous harvesting terms,” Discrete Continuous Dyn. Syst. Ser. B, vol. 24, pp. 233–239, 2019, https://doi.org/10.3934/dcdsb.2019107.Search in Google Scholar

[25] F. Kong, “Dynamical behaviors of the generalized hematopoiesis model with discontinuous harvesting terms,” Int. J. Biomath., vol. 12, no. 01, p. 1950009, 2019, https://doi.org/10.1142/s1793524519500098.Search in Google Scholar

[26] F. C. Kong, Q. X. Zhu, K. Wang, and J. J. Nieto, “Stability analysis of almost periodic solutions of discontinuous BAM neural networks with hybrid time-varying delays and D operator,” J. Franklin Inst., vol. 356, no. 18, pp. 11605–11637, 2019, https://doi.org/10.1016/j.jfranklin.2019.09.030.Search in Google Scholar

[27] Y. Huang, H. Zhang, and Z. Wang, “Multistability and multiperiodicity of delayed bidirectional associative memory neural networks with discontinuous activation functions,” Appl. Math. Comput., vol. 219, no. 3, pp. 899–910, 2012, https://doi.org/10.1016/j.amc.2012.06.068.Search in Google Scholar

[28] H. Wu and Y. Li, “Existence and stability of periodic solution for BAM neural networks with discontinuous neuron activations,” Comput. Math. Appl., vol. 56, pp. 1981–1993, 2008, https://doi.org/10.1016/j.camwa.2008.04.027.Search in Google Scholar

[29] H. Zhou, Z. F. Zhou, and W. Jiang, “Almost periodic solutions for neutral type BAM neural networks with distributed leakage delays on time scales,” Neurocomputing, vol. 157, pp. 223–230, 2015, https://doi.org/10.1016/j.neucom.2015.01.013.Search in Google Scholar

[30] A. F. Filippov, Mathematics and its Applications (Soviet Series), Differential Equations with Discontinuous Right-Hand Sides, Boston, Kluwer Academic Publishers, 1988.10.1007/978-94-015-7793-9Search in Google Scholar

[31] X. Wei and Z. Qiu, “Anti-periodic solutions for BAM neural networks with time delays,” Appl. Math. Comput., vol. 221, pp. 221–229, 2013, https://doi.org/10.1016/j.amc.2013.06.063.Search in Google Scholar

[32] A. M. Fink, Almost Periodic Differential Equations, Lecture Notes in Mathematics, vol. 377, Berlin, Springer, 1974.10.1007/BFb0070324Search in Google Scholar

[33] C. He, Almost Periodic Differential Equation, Beijing, Higher Education Publishing House, 1992, [In Chinese].Search in Google Scholar

[34] J. Aubin and A. Cellina, Differential Inclusions, Berlin, Springer-Verlag, 1984.10.1007/978-3-642-69512-4Search in Google Scholar

Received: 2020-03-09
Accepted: 2020-10-17
Published Online: 2020-11-13
Published in Print: 2021-12-20

© 2020 Weijun Xie et al., published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
