
State estimation of T–S fuzzy Markovian generalized neural networks with reaction–diffusion terms: a time-varying nonfragile proportional retarded sampled-data control scheme

  • Original Article · Neural Computing and Applications

Abstract

This paper focuses on the state estimation problem for T–S fuzzy Markovian generalized neural networks (GNNs) with reaction–diffusion terms. An estimator-based nonfragile time-varying proportional retarded sampled-data controller, which permits norm-bounded gain uncertainty and contains a time-varying delay, is designed to guarantee the asymptotic stability of the error system. By constructing a novel Lyapunov–Krasovskii functional that involves positive-indefinite and discontinuous terms, and by combining the reciprocally convex combination method, Jensen's inequality and the Wirtinger-based inequality, a less conservative stability criterion is derived. Moreover, the principle for choosing the number of selected variables in the derivation of the main results is also analyzed. Finally, two numerical examples are given to demonstrate the validity and advantages of the proposed results.


References

  1. Zhang X, Han Q (2011) Global asymptotic stability for a class of generalized neural networks with interval time-varying delays. IEEE Trans Neural Netw 22(8):1180

  2. Saravanakumar R, Syed AM, Ahn CK, Karimi HR, Shi P (2017) Stability of Markovian jump generalized neural networks with interval time-varying delays. IEEE Trans Neural Netw Learn Syst 28(8):1840–1850

  3. Samidurai R, Manivannan R, Ahn CK, Karimi HR (2016) New criteria for stability of generalized neural networks including Markov jump parameters and additive time delays. IEEE Trans Syst Man Cybern Syst 48(4):485–499

  4. Chen G, Xia J, Zhuang G (2016) Delay-dependent stability and dissipativity analysis of generalized neural networks with Markovian jump parameters and two delay components. J Frankl Inst 353(9):2137–2158

  5. Rajchakit G, Saravanakumar R (2018) Exponential stability of semi-Markovian jump generalized neural networks with interval time-varying delays. Neural Comput Appl 29(2):483–492

  6. Dharani S, Balasubramaniam P (2019) Delayed impulsive control for exponential synchronization of stochastic reaction–diffusion neural networks with time-varying delays using general integral inequalities. Neural Comput Appl. https://doi.org/10.1007/s00521-019-04223-8

  7. Song X, Man J, Fu Z, Wang M, Lu J (2019) Memory-based state estimation of T–S fuzzy Markov jump delayed neural networks with reaction–diffusion terms. Neural Process Lett 50(3):2529–2546

  8. Zeng D, Zhang R, Park JH, Pu Z, Liu Y (2019) Pinning synchronization of directed coupled reaction–diffusion neural networks with sampled-data communications. IEEE Trans Neural Netw Learn Syst. https://doi.org/10.1109/TNNLS.2019.2928039

  9. Wei H, Chen C, Tu Z, Li N (2018) New results on passivity analysis of memristive neural networks with time-varying delays and reaction–diffusion term. Neurocomputing 275:2080–2092

  10. Song X, Wang M, Song S, Wang Z (2019) Intermittent pinning synchronization of reaction–diffusion neural networks with multiple spatial diffusion couplings. Neural Comput Appl 31(12):9279–9294

  11. Huang Y, Hou J, Yang E (2019) General decay anti-synchronization of multi-weighted coupled neural networks with and without reaction–diffusion terms. Neural Comput Appl. https://doi.org/10.1007/s00521-019-04313-7

  12. Jiang B, Karimi HR, Kao Y, Gao C (2019) Takagi–Sugeno model based event-triggered fuzzy sliding mode control of networked control systems with semi-Markovian switchings. IEEE Trans Fuzzy Syst. https://doi.org/10.1109/TFUZZ.2019.2914005

  13. Ali MS, Gunasekaran N, Zhu Q (2017) State estimation of T–S fuzzy delayed neural networks with Markovian jumping parameters using sampled-data control. Fuzzy Sets Syst 306:87–104

  14. Zhang Y, Shi P, Agarwal RK, Shi Y (2015) Dissipativity analysis for discrete time-delay fuzzy neural networks with Markovian jumps. IEEE Trans Fuzzy Syst 24(2):432–443

  15. Arunkumar A, Sakthivel R, Mathiyalagan K, Park JH (2014) Robust stochastic stability of discrete-time fuzzy Markovian jump neural networks. ISA Trans 53(4):1006–1014

  16. Ali MS, Gunasekaran N, Saravanakumar R (2018) Design of passivity and passification for delayed neural networks with Markovian jump parameters via non-uniform sampled-data control. Neural Comput Appl 30(2):595–605

  17. Liu Y, Tong L, Lou J, Lu J, Cao J (2019) Sampled-data control for the synchronization of Boolean control networks. IEEE Trans Cybern 49(2):726–732

  18. Li L, Yang Y, Lin G (2016) The stabilization of BAM neural networks with time-varying delays in the leakage terms via sampled-data control. Neural Comput Appl 27(2):447–457

  19. Chen W, Luo S, Zheng W (2017) Generating globally stable periodic solutions of delayed neural networks with periodic coefficients via impulsive control. IEEE Trans Cybern 47(7):1590–1603

  20. Wang Y, Shen H, Duan D (2017) On stabilization of quantized sampled-data neural-network-based control systems. IEEE Trans Cybern 47(10):3124–3135

  21. Wu Z, Xu Z, Shi P, Chen MZ, Su H (2018) Nonfragile state estimation of quantized complex networks with switching topologies. IEEE Trans Neural Netw Learn Syst 29(10):5111–5121

  22. Yue D, Han Q (2005) Delayed feedback control of uncertain systems with time-varying input delay. Automatica 41(2):233–240

  23. Zhang C, He Y, Jiang L, Wu Q, Wu M (2017) Delay-dependent stability criteria for generalized neural networks with two delay components. IEEE Trans Neural Netw Learn Syst 25(7):1263–1276

  24. Chen W, Zheng W (2010) Robust stability analysis for stochastic neural networks with time-varying delay. IEEE Trans Neural Netw 21(3):508–514

  25. Li T, Ye X (2010) Improved stability criteria of neural networks with time-varying delays—an augmented LKF approach. Neurocomputing 73(4–6):1038–1047

  26. Zuo Z, Yang C, Wang Y (2010) A new method for stability analysis of recurrent neural networks with interval time-varying delay. IEEE Trans Neural Netw 21(2):339–344

  27. Wu Z, Lam J, Su H, Chu J (2012) Stability and dissipativity analysis of static neural networks with time delay. IEEE Trans Neural Netw Learn Syst 23(2):199–210

  28. Li T, Song A, Fei S, Wang T (2010) Delay-derivative-dependent stability for delayed neural networks with unbound distributed delay. IEEE Trans Neural Netw 21(8):1365

  29. Liu Y, Park JH, Guo B, Shu Y (2018) Further results on stabilization of chaotic systems based on fuzzy memory sampled-data control. IEEE Trans Fuzzy Syst 26(2):1040–1045

  30. Liu Y, Guo B, Park JH, Lee SM (2018) Nonfragile exponential synchronization of delayed complex dynamical networks with memory sampled-data control. IEEE Trans Neural Netw Learn Syst 29(1):118–128

  31. Zhang R, Zeng D, Park JH, Liu Y, Zhong S (2018) A new approach to stabilization of chaotic systems with nonfragile fuzzy proportional retarded sampled-data control. IEEE Trans Cybern 49(9):3218–3229

  32. Park PG, Ko JW, Jeong C (2011) Reciprocally convex approach to stability of systems with time-varying delays. Automatica 47(1):235–238

  33. Wu Z, Shi P, Su H, Chu J (2013) Stochastic synchronization of Markovian jump neural networks with time-varying delay using sampled data. IEEE Trans Cybern 43(6):1796–1806

  34. Xu Z, Su H, Shi P, Lu R, Wu Z (2016) Reachable set estimation for Markovian jump neural networks with time-varying delays. IEEE Trans Cybern 47(10):3208–3217

  35. Ma Y, Zheng Y (2018) Delay-dependent stochastic stability for discrete singular neural networks with Markovian jump and mixed time-delays. Neural Comput Appl 29(1):111–122

  36. Xiao Q, Huang T, Zeng Z (2018) Passivity and passification of fuzzy memristive inertial neural networks on time scales. IEEE Trans Fuzzy Syst 26(6):3342–3355

  37. Liu Y, Wang Z, Liu X (2006) Global exponential stability of generalized recurrent neural networks with discrete and distributed delays. Neural Netw 19(5):667–675

  38. Shen H, Huang X, Zhou J, Wang Z (2012) Global exponential estimates for uncertain Markovian jump neural networks with reaction–diffusion terms. Nonlinear Dyn 69(1–2):473–486

  39. Ali MS, Gunasekaran N (2018) Sampled-data state estimation of Markovian jump static neural networks with interval time-varying delays. J Comput Appl Math 343(C):217–229

  40. Wu Z, Shi P, Su H, Chu J (2014) Sampled-data fuzzy control of chaotic systems based on a T–S fuzzy model. IEEE Trans Fuzzy Syst 22(1):153–163

  41. Rakkiyappan R, Dharani S (2017) Sampled-data synchronization of randomly coupled reaction–diffusion neural networks with Markovian jumping and mixed delays using multiple integral approach. Neural Comput Appl 28(3):1–14

  42. Ali MS, Arik S, Saravanakumar R (2015) Delay-dependent stability criteria of uncertain Markovian jump neural networks with discrete interval and distributed time-varying delays. Neurocomputing 158:167–173

  43. Huang D, Jiang M, Jian J (2017) Finite-time synchronization of inertial memristive neural networks with time-varying delays via sampled-date control. Neurocomputing 266:527–539

  44. Guo Z, Gong S, Huang T (2018) Finite-time synchronization of inertial memristive neural networks with time delay via delay-dependent control. Neurocomputing 293:100–107

  45. Gu K, Chen J, Kharitonov V (2003) Stability of time-delay systems. Birkhauser Boston, Inc., Secaucus

  46. Guojun L (2008) Global exponential stability and periodicity of reaction–diffusion delayed recurrent neural networks with Dirichlet boundary conditions. Chaos Solitons Fractals 35(1):116–125

  47. Wang Y, Xie L, De Souza CE (1992) Robust control of a class of uncertain nonlinear systems. Syst Control Lett 19(2):139–149

  48. Seuret A, Gouaisbaut F (2013) Wirtinger-based integral inequality: application to time-delay systems. Automatica 49(9):2860–2866


Acknowledgements

Project supported by National Natural Science Foundation of China (Nos. 61976081, U1604146) and Foundation for the University Technological Innovative Talents of Henan Province (No. 18HASTIT019).

Author information

Corresponding author

Correspondence to Xiaona Song.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest with respect to this work and no commercial or associative interest that represents a conflict of interest in connection with the submitted work.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

1.1 Appendix 1: Crucial Lemmas

Lemma 1

[45] For any matrix \(X \in {{\mathbb {R}}^{n \times n}}\) with \(X = {X^\mathrm{T}} > 0\), scalars \(c < d\) and an integrable vector function \(\alpha :[c,d] \rightarrow {{\mathbb {R}}^{n}}\), one has

$$\begin{aligned} -\int _c^d {{\alpha ^\mathrm{T}}(s)X\alpha (s)\mathrm{d}s} \le - \frac{1}{{d - c}}{\left( {\int _c^d {\alpha (s)} \mathrm{d}s} \right) ^\mathrm{T}}X\left( {\int _c^d {\alpha (s)} \mathrm{d}s} \right) \end{aligned}$$
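
As an aside, this Jensen-type integral inequality can be sanity-checked numerically by discretizing the integrals. The snippet below is only an illustration with an arbitrarily chosen \(X\) and \(\alpha\); it is not part of the original proof.

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 3, 2000                        # vector dimension and number of grid points
c, d = 0.0, 1.5
s = np.linspace(c, d, N)
ds = s[1] - s[0]

A = rng.standard_normal((n, n))
X = A @ A.T + n * np.eye(n)           # X = X^T > 0

# an arbitrary smooth test function alpha(s) in R^n
alpha = np.stack([np.sin((k + 1) * s) + 0.5 * k for k in range(n)], axis=1)

lhs = -np.sum([a @ X @ a for a in alpha]) * ds        # -∫ α^T X α ds
integ = alpha.sum(axis=0) * ds                         # ∫ α ds
rhs = -(integ @ X @ integ) / (d - c)                   # -(1/(d-c)) (∫α)^T X (∫α)

print(lhs <= rhs)    # Lemma 1 predicts True
```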

Lemma 2

[46] Let \(\Omega\) be a cube \(\left| {{x_k}} \right| < {\tilde{l}_k}\,(k = 1,2,\ldots ,m)\) and let \(\nu (x)\) be a real-valued function belonging to \({C^1}(\Omega )\) that satisfies \(\nu (x)\left| {_{\partial \Omega }} \right. = 0\). Then, for each \(k\),

$$\begin{aligned} \int _\Omega {\nu ^2(x)\mathrm{d}x} \le \tilde{l}_k^2\int _\Omega {\left| {\frac{{\partial \nu (x)}}{{\partial {x_k}}}} \right| ^2\mathrm{d}x} \end{aligned}$$
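
A minimal one-dimensional (\(m = 1\)) illustration of Lemma 2 is sketched below; the particular test function, chosen only so that it vanishes on the boundary, is an arbitrary assumption for the check.

```python
import numpy as np

l = 2.0                                     # half-length of the interval (-l, l)
x = np.linspace(-l, l, 20001)
dx = x[1] - x[0]

# a smooth function vanishing at x = ±l
nu = np.cos(np.pi * x / (2 * l)) * (1 + 0.3 * np.sin(np.pi * x / l))
dnu = np.gradient(nu, dx)

lhs = np.sum(nu ** 2) * dx                  # ∫ ν²(x) dx
rhs = l ** 2 * np.sum(dnu ** 2) * dx        # l̃² ∫ |∂ν/∂x|² dx

print(lhs <= rhs)    # Lemma 2 predicts True
```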

Lemma 3

[32] Let \({g_1},{g_2},\ldots ,{g_N}:{{\mathbb {R}}^m} \rightarrow {{\mathbb {R}}^1}\) have positive values in an open subset E of \({{\mathbb {R}}^m}\). Then, the reciprocally convex combination of \({g_i}\) over E satisfies

$$\begin{aligned} \mathop {\min }\limits _{\left\{ {{\nu _i}\left| {{\nu _i}> 0,\sum \limits _i {{\nu _i}} = 1} \right. } \right\} } \sum \limits _i {\frac{1}{{{\nu _i}}}{g_i}(t)} = \sum \limits _i {{g_i}(t)} + \mathop {\max }\limits _{{f_{i,j}}(t)} \sum \limits _{i \ne j} {{f_{i,j}}(t)} \end{aligned}$$

subject to

$$\begin{aligned} \{ {f_{i,j}}:{{{\mathbb {R}}}^m} \rightarrow {{{\mathbb {R}}}^1},{f_{i,j}}(t) = {f_{j,i}}(t),\left[ {\begin{array}{*{20}{c}} {{g_i}(t)}&{}{{f_{i,j}}(t)}\\ {{f_{j,i}}(t)}&{}{{g_j}(t)} \end{array}} \right] \ge 0\} \end{aligned}$$
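
For the two-function case (\(N = 2\)) with scalar \(g_1, g_2\), the lower bound implied by Lemma 3 can be spot-checked on random data; the sampling ranges below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
ok = True
for _ in range(10000):
    g1, g2 = rng.uniform(0.1, 5.0, size=2)
    f = rng.uniform(-1.0, 1.0) * np.sqrt(g1 * g2)   # guarantees [[g1, f], [f, g2]] >= 0
    nu1 = rng.uniform(1e-3, 1.0 - 1e-3)             # nu2 = 1 - nu1
    lhs = g1 / nu1 + g2 / (1.0 - nu1)               # reciprocally convex combination
    rhs = g1 + g2 + 2.0 * f                         # lower bound claimed by Lemma 3
    ok = ok and bool(lhs >= rhs - 1e-9)
print(ok)    # True
```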

Lemma 4

[47] Given real matrices A, B and D of appropriate dimensions with \({D^\mathrm{T}}D \le I\) and a scalar \(\varepsilon > 0\), the following inequality holds for any vectors \(x,y \in {{\mathbb {R}}^n}\):

$$\begin{aligned} 2{x^\mathrm{T}}ADBy \le {\varepsilon ^{ - 1}}{x^\mathrm{T}}A{A^\mathrm{T}}x + \varepsilon {y^\mathrm{T}}{B^\mathrm{T}}By \end{aligned}$$
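
Lemma 4 can likewise be verified on randomly generated matrices; the dimensions and the scalar \(\varepsilon\) below are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, p = 4, 3, 3
x, y = rng.standard_normal(n), rng.standard_normal(n)
A = rng.standard_normal((n, m))
B = rng.standard_normal((p, n))
eps = 0.7

D0 = rng.standard_normal((m, p))
D = D0 / np.linalg.norm(D0, 2)        # spectral norm 1, so D^T D <= I

lhs = 2.0 * x @ A @ D @ B @ y
rhs = (x @ A @ A.T @ x) / eps + eps * (y @ B.T @ B @ y)
print(lhs <= rhs + 1e-9)              # Lemma 4 predicts True
```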

Lemma 5

[48] For any matrix \(\mathcal{M} \in {{\mathbb {R}}^{n \times n}}\) with \(\mathcal{M} = {\mathcal{M}^\mathrm{T}} > 0\) and any continuously differentiable function \(\omega :[a,b]\rightarrow {{\mathbb {R}}^n}\), the following inequality holds:

$$\begin{aligned} \int _a^b {{{\dot{\omega }}^\mathrm{T}}(x)\mathcal{M}\dot{\omega }(x)\mathrm{d}x} \ge \frac{1}{{b - a}}\dot{r}_0^\mathrm{T}\mathcal{M}{\dot{r}_0} + \frac{3}{{b - a}}\dot{r}_1^\mathrm{T}\mathcal{M}{\dot{r}_1} \end{aligned}$$

where

$$\begin{aligned} {\dot{r}_0} = \omega (b) - \omega (a),\ \ {\dot{r}_1} = \omega (b) + \omega (a) - \frac{2}{{b - a}}\int _a^b {\omega (x)\mathrm{d}x}. \end{aligned}$$
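
As with the other lemmas, a discretized sanity check of this Wirtinger-based inequality for a scalar \(\omega\) (\(n = 1\)) is sketched below; the particular \(\omega\) and the value of \(\mathcal{M}\) are illustrative assumptions.

```python
import numpy as np

a, b = 0.0, 2.0
t = np.linspace(a, b, 40001)
dt = t[1] - t[0]
m_val = 1.3                                   # the 1x1 case of M = M^T > 0

w = np.sin(2.0 * t) + 0.4 * t ** 2            # an arbitrary smooth test function ω
dw = np.gradient(w, dt)

lhs = m_val * np.sum(dw ** 2) * dt            # ∫ ω̇^T M ω̇ dx
r0 = w[-1] - w[0]                             # ω(b) - ω(a)
r1 = w[-1] + w[0] - 2.0 / (b - a) * np.sum(w) * dt
rhs = m_val * (r0 ** 2 + 3.0 * r1 ** 2) / (b - a)
print(lhs >= rhs)                             # Lemma 5 predicts True
```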

1.2 Appendix 2: Proof of Theorem 1

For simplicity, the following vector notations are adopted:

$$\begin{aligned} &{\varpi _1} = \frac{1}{{\varsigma (t)}}\int _{t - \varsigma (t)}^t {{y_\mu }(z,x)} \mathrm{d}x,\\&{\varpi _2} = \frac{1}{{\varsigma (t)}}\int _{t - \varsigma (t) - \eta (t) }^{t - \eta (t) } {{y_\mu }(z,x)} \mathrm{d}x,\\&{\psi _1} = {y_\mu } - {y^\varsigma _\mu },\\&{\psi _2} = {y_\mu }+ {y^\varsigma _\mu } - 2{\varpi _1},\\&{\psi _3} = {y_{\mu \eta }}- {y^\varsigma _{\mu \eta } },\\&{\psi _4} = {y_{\mu \eta } } + {y^\varsigma _{\mu \eta }} - 2{\varpi _2},\ \ \chi _{1\mu }^\mathrm{T} = [y_\mu ^\mathrm{T}, {\tilde{f}^\mathrm{T}}({W_\alpha }{y_\mu })],\ \ \\&\chi _{2\mu }^\mathrm{T} = \{ {[\varsigma (t){\varpi _1}]^\mathrm{T}}, {[{y_\mu } - {y^\varsigma _\mu }]^\mathrm{T}}\},\ \ \\&\chi _{3\mu }^\mathrm{T}= \{ {[\varsigma (t){\varpi _2}]^\mathrm{T}},\ \ {[{y_{\mu \eta }} - {y^\varsigma _{\mu \eta }}]^\mathrm{T}}\}. \end{aligned}$$

We choose the following LKF candidate:

$$\begin{aligned} V({y_\mu },t) = \sum \limits _{\imath = 1}^9 {{V_\imath }({y_\mu },t)} \end{aligned}$$
(17)

where

$$\begin{aligned} {V_1}({y_\mu },t) =&\, \int _\Omega {\sum \limits _{\mu = 1}^n \left\{ y_\mu ^\mathrm{T}{\mathcal{P}_\alpha }{y_\mu } + {\gamma _2}\sum \limits _{k = 1}^q{{(\frac{{\partial {y_\mu }}}{{\partial {z_k}}})}^\mathrm{T}} {\mathcal{E}_k}\Gamma (\frac{{\partial {y_\mu }}}{{\partial {z_k}}}) \right\} } \mathrm{d}z\\ {V_2}({y_\mu },t) =&\, \int _\Omega \sum \limits _{\mu = 1}^n \left\{ \int _{t - h(t)}^t {\chi _{1\mu }^\mathrm{T}(z,x){\mathcal{Q}_1}{\chi _{1\mu }}(z,x)\mathrm{d}x}\right. \\&\left. + \int _{t - {h_1}}^t {\chi _{1\mu }^\mathrm{T}(z,x){\mathcal{Q}_2}{\chi _{1\mu }}(z,x)\mathrm{d}x} \right\} \mathrm{d}z \\&+ \int _\Omega {\sum \limits _{\mu = 1}^n \left\{ \int _{t - {h_2}}^t {\chi _{1\mu }^\mathrm{T}(z,x){\mathcal{Q}_3}{\chi _{1\mu }}(z,x)\mathrm{d}x}\right\} } \mathrm{d}z\\ {V_3}({y_\mu },t) =&\, \int _\Omega {\sum \limits _{\mu = 1}^n {\left\{ (\varsigma _m - \varsigma (t))\chi _{2\mu }^\mathrm{T}{\mathcal{W}_1}{\chi _{2\mu }}\right\} } } \mathrm{d}z\\ {V_4}({y_\mu },t) =&\, \int _\Omega {\sum \limits _{\mu = 1}^n {\left\{ (\varsigma { _m} - \varsigma (t))\chi _{3\mu }^\mathrm{T}{\mathcal{W}_2}{\chi _{3\mu }}\right\} } } \mathrm{d}z\\ {V_5}({y_\mu },t) =&\, \int _\Omega {\sum \limits _{\mu = 1}^n {\left\{ \int _{t - \eta (t) }^t {y_\mu ^\mathrm{T}(z,x){\mathcal{H}_1}{y_\mu }(z,x)\mathrm{d}x }\right\} } } \mathrm{d}z \\&+ \int _\Omega \sum \limits _{\mu = 1}^n {\left\{ \int _{t - \eta (t) }^t {\dot{y}_\mu ^\mathrm{T}(z,x){\mathcal{H}_2}{{\dot{y}}_\mu }(z,x)\mathrm{d}x }\right\} } \mathrm{d}z\\ {V_6}({y_\mu },t) =&\, \int _\Omega \sum \limits _{\mu = 1}^n \left\{ {\varsigma ^2}\int _{t - \varsigma (t)}^t \dot{y}_\mu ^\mathrm{T}(z,x){\mathcal{U}_1}{{\dot{y}}_\mu }(z,x)\mathrm{d}x \right. \\&\left. - \varsigma (t)\psi _1^\mathrm{T}{\mathcal{U}_1}{\psi _1} - 3\varsigma (t)\psi _2^\mathrm{T}{\mathcal{U}_1}{\psi _2} \right\} \mathrm{d}z\\ {V_7}({y_\mu },t) =&\, \int _\Omega \sum \limits _{\mu = 1}^n \left\{ {\varsigma ^2}\int _{t - \varsigma (t) - \eta (t) }^{t - \eta (t) } \dot{y}_\mu ^\mathrm{T}(z,x){\mathcal{U}_2}{{\dot{y}}_\mu }(z,x)\mathrm{d}x \right. \\&\left. - \varsigma (t)\psi _3^\mathrm{T}{\mathcal{U}_2}{\psi _3} { - 3\varsigma (t)\psi _4^\mathrm{T}{\mathcal{U}_2}{\psi _4}} \right\} \mathrm{d}z\\ {V_8}({y_\mu },t) =&\, \int _\Omega \sum \limits _{\mu = 1}^n \left. \left\{ ({h_2} - {h_1})\int _{ - {h_2}}^{ - {h_1}} \int _{t + \theta }^t {y_\mu ^\mathrm{T}(z,x){\mathcal{Z}_1}{y_\mu }(z,x)} \mathrm{d}x\mathrm{d}\theta \right. \right. \\&\left. + \varsigma \int _{ - \varsigma }^0 {\int _{t + \theta }^t {y_\mu ^\mathrm{T}(z,x){\mathcal{Z}_2}{y_\mu }(z,x)} \mathrm{d}x\mathrm{d}\theta } \right\} \mathrm{d}z\\ {V_9}({y_\mu },t) =&\, \int _\Omega {\sum \limits _{\mu = 1}^n {\left\{ ({\varsigma _m} - \varsigma (t))\varsigma (t)[\varpi _1^\mathrm{T},\varpi _2^\mathrm{T}]\mathcal{I}\left[ {\begin{array}{*{20}{c}} {{\varpi _1}}\\ {{\varpi _2}} \end{array}} \right] \right\} } } \mathrm{d}z \end{aligned}$$

Then, it can be deduced that for each \(\alpha \in \mathcal{S}\),

$$\begin{aligned} \mathcal{L}V({y_\mu },t) = \sum \limits _{\imath = 1}^9 {\mathcal{L}{V_\imath }({y_\mu },t)} \end{aligned}$$
(18)

where

$$\begin{aligned}\mathcal{L}{V_1}({y_\mu },t) =& \int _\Omega \sum \limits _{\mu = 1}^n \left\{ 2y_\mu ^\mathrm{T}{\mathcal{P}_\alpha }{{\dot{y}}_\mu } + \sum \limits _{\beta \in \mathcal{S}} {y_\mu ^\mathrm{T}{\varphi _{\alpha \beta }}{\mathcal{P}_\beta }} {y_\mu } \right. \\&\quad \left. + 2 {\gamma _2}\sum \limits _{k = 1}^q {{{\left(\frac{{\partial {y_\mu }}}{{\partial {z_k}}}\right)}^\mathrm{T}}{\mathcal{E}_k}\Gamma \left(\frac{{\partial {{\dot{y}}_\mu }}}{{\partial {z_k}}}\right)} \right\} \mathrm{d}z\\ \mathcal{L}{V_2}({y_\mu },t) &\le \int _\Omega {\sum \limits _{\mu = 1}^n {\left. {\left\{ {\chi _{1\mu }^\mathrm{T}({\mathcal{Q}_1} + {\mathcal{Q}_2} + {\mathcal{Q}_3}){\chi _{1\mu }}} \right. } \right\} } \mathrm{d}z} \\&\quad - (1 - h)\int _\Omega {\sum \limits _{\mu = 1}^n {\left. {\left\{ {\chi _{1\mu h }^\mathrm{T}{\mathcal{Q}_1}{\chi _{1\mu h}}} \right. } \right\} } } \mathrm{d}z\\&\quad - \int _\Omega {\sum \limits _{\mu = 1}^n {\left. {\left\{ {\chi _{1\mu h_1 }^\mathrm{T}{\mathcal{Q}_2}{\chi _{1\mu h_1 }}} \right. } \right\} } \mathrm{d}z} \\&\quad - \int _\Omega {\sum \limits _{\mu = 1}^n {\left. {\left\{ {\chi _{1\mu h_2 }^\mathrm{T}{\mathcal{Q}_3}{\chi _{1\mu h_2}}} \right. } \right\} } \mathrm{d}z}\\ \mathcal{L}{V_3}({y_\mu },t) &= - \int _\Omega {\sum \limits _{\mu = 1}^n {\left\{ \chi _{2\mu }^\mathrm{T}{\mathcal{W}_1}{\chi _{2\mu }}\right\} } } \mathrm{d}z \\&\quad + 2\int _\Omega {\sum \limits _{\mu = 1}^n {\left\{ (\varsigma { _m} - \varsigma (t))\chi _{2\mu }^\mathrm{T}{\mathcal{W}_1}\left[ \begin{array}{l} {y_\mu }\\ {{\dot{y}}_\mu } \end{array} \right] \right\} }} \mathrm{d}z\\ \mathcal{L}{V_4}({y_\mu },t) &= - \int _\Omega {\sum \limits _{\mu = 1}^n {\left\{ \chi _{3\mu }^\mathrm{T}{\mathcal{W}_2}{\chi _{3\mu }}\right\} } } \mathrm{d}z\\&\quad + 2\int _\Omega {\sum \limits _{\mu = 1}^n {\left\{ (\varsigma { _m} - \varsigma (t))\chi _{3\mu }^\mathrm{T}{\mathcal{W}_2}\left[ \begin{array}{l} {y_{\mu \eta }}\\ {{\dot{y}}_{\mu \eta }} \end{array} \right] \right\} } } \mathrm{d}z\\ \mathcal{L}{V_5}({y_\mu },t) &= \int _\Omega {\sum \limits _{\mu = 1}^n {\left\{ y_\mu ^\mathrm{T}{\mathcal{H}_1}{y_\mu } - y_{\mu \eta } ^\mathrm{T}{\mathcal{H}_1}{y_\mu \eta }\right\} } } \mathrm{d}z\\&\quad + \int _\Omega {\sum \limits _{\mu = 1}^n {\left\{ \dot{y}_\mu ^\mathrm{T}{\mathcal{H}_2}{{\dot{y}}_\mu } - \dot{y}_{\mu \eta }^\mathrm{T}{\mathcal{H}_2}{{\dot{y}}_{\mu \eta }}\right\} } } \mathrm{d}z \\ \mathcal{L}{V_6}({y_\mu },t) &= \int _\Omega \sum \limits _{\mu = 1}^n \left\{ {\varsigma ^2}\dot{y}_\mu ^\mathrm{T}{\mathcal{U}_1}{{\dot{y}}_\mu } - \psi _1^\mathrm{T}{\mathcal{U}_1}{\psi _1} \right. \\&- 2\varsigma (t)\psi _1^\mathrm{T}{\mathcal{U}_1}{{\dot{y}}_\mu } - 3\psi _2^\mathrm{T}{\mathcal{U}_1}{\psi _2} \\&\left. - 6\psi _2^\mathrm{T}{\mathcal{U}_1}[\varsigma (t){{\dot{y}}_\mu } + 2{\varpi _1} - 2{y_\mu }]\right\} \mathrm{d}z\\ \mathcal{L}{V_7}({y_\mu },t) &= \int _\Omega \sum \limits _{\mu = 1}^n \left\{ {\varsigma ^2}\dot{y}_{\mu \eta } ^\mathrm{T}{\mathcal{U}_2}{{\dot{y}}_{\mu \eta } } \right. \\&\quad \left. - \psi _3^\mathrm{T}{\mathcal{U}_2}{\psi _3} - 2\varsigma (t)\psi _3^\mathrm{T}{\mathcal{U}_2}{{\dot{y}}_{\mu \eta } } - 3\psi _4^\mathrm{T}{\mathcal{U}_2}{\psi _4} \right. \\&\left. 
- 6\psi _4^\mathrm{T}{\mathcal{U}_2}[\varsigma (t){{\dot{y}}_{\mu \eta } } + 2{\varpi _2} - 2{y_{\mu \eta }}]\right\} \mathrm{d}z\\ \mathcal{L}{V_8}({y_\mu },t) &= \int _\Omega \sum \limits _{\mu = 1}^n \left\{ {{({h_2} - {h_1})}^2}y_\mu ^\mathrm{T}{\mathcal{Z}_1}{y_\mu } - ({h_2} - {h_1})\right. \\&\left. \int _{t - {h_2}}^{t - {h_1}} {y_\mu ^\mathrm{T}(z,x){\mathcal{Z}_1}{y_\mu }(z,x)\mathrm{d}x} \right. \\&\left. + {\varsigma ^2}y_\mu ^\mathrm{T}{\mathcal{Z}_2}{y_\mu } - \varsigma \int _{t - \varsigma }^t {y_\mu ^\mathrm{T}(z,x){\mathcal{Z}_2}{y_\mu }(z,x)\mathrm{d}x} \right\} \mathrm{d}z \end{aligned}$$

For \({h_1}< h(t) < {h_2}\), the following inequalities can be deduced by employing Lemmas 1 and 3:

$$\begin{aligned} &- ({h_2} - {h_1})\int _{t - {h_2}}^{t - {h_1}} {y_\mu ^\mathrm{T}(z,x){\mathcal{Z}_1}{y_\mu }(z,x)\mathrm{d}x}\\&\quad \le - \frac{{{h_2} - {h_1}}}{{{h_2} - h(t)}}{\left( {\int _{t - {h_2}}^{t - h(t)} {{y_\mu }(z,x)\mathrm{d}x} } \right) ^\mathrm{T}}{\mathcal{Z}_1}\left( {\int _{t - {h_2}}^{t - h(t)} {{y_\mu }(z,x)\mathrm{d}x} } \right) \\&\qquad - \frac{{{h_2} - {h_1}}}{{h(t) - {h_1}}}{\left( {\int _{t - h(t)}^{t - {h_1}} {{y_\mu }(z,x)\mathrm{d}x} } \right) ^\mathrm{T}}{\mathcal{Z}_1}\left( {\int _{t - h(t)}^{t - {h_1}} {{y_\mu }(z,x)\mathrm{d}x} } \right) \\&\quad \le - {\left[ \begin{array}{l} \int _{t - {h_2}}^{t - h(t)} {{y_\mu }(z,x)\mathrm{d}x} \\ \int _{t - h(t)}^{t - {h_1}} {{y_\mu }(z,x)\mathrm{d}x} \end{array} \right] ^\mathrm{T}}\left[ {\begin{array}{*{20}{c}} {{\mathcal{Z}_1}}&{}{{{\tilde{\mathcal{Z}}}_1}}\\ * &{}{{\mathcal{Z}_1}} \end{array}} \right] \left[ \begin{array}{l} \int _{t - {h_2}}^{t - h(t)} {{y_\mu }(z,x)\mathrm{d}x} \\ \int _{t - h(t)}^{t - {h_1}} {{y_\mu }(z,x)\mathrm{d}x} \end{array} \right] \end{aligned}$$
(19)
$$\begin{aligned} &- \varsigma \int _{t - \varsigma }^t {y_\mu ^\mathrm{T}(z,x){\mathcal{Z}_2}{y_\mu }(z,x)\mathrm{d}x} \\&\quad \le - \frac{\varsigma }{{\varsigma - \varsigma (t)}}{\left( {\int _{t - \varsigma }^{t - \varsigma (t)} {{y_\mu }(z,x)\mathrm{d}x} } \right) ^\mathrm{T}}\\&\quad {\mathcal{Z}_2}\left( {\int _{t - \varsigma }^{t - \varsigma (t)} {{y_\mu }(z,x)\mathrm{d}x} } \right) \\&\quad - \frac{\varsigma }{{\varsigma (t)}}{\left( {\int _{t - \varsigma (t)}^t {{y_\mu }(z,x)\mathrm{d}x} } \right) ^\mathrm{T}}{\mathcal{Z}_2}\left( {\int _{t - \varsigma (t)}^t {{y_\mu }(z,x)\mathrm{d}x} } \right) \end{aligned}$$
$$\begin{aligned} \le - {\left[ \begin{array}{l} \int _{t - \varsigma }^{t - \varsigma (t)} {{y_\mu }(z,x)\mathrm{d}x} \\ \int _{t - \varsigma (t)}^t {{y_\mu }(z,x)\mathrm{d}x} \end{array} \right] ^\mathrm{T}}\left[ {\begin{array}{*{20}{c}} {{\mathcal{Z}_2}}&{}{{{\tilde{\mathcal{Z}}}_2}}\\ * &{}{{\mathcal{Z}_2}} \end{array}} \right] \left[ \begin{array}{l} \int _{t - \varsigma }^{t - \varsigma (t)} {{y_\mu }(z,x)\mathrm{d}x} \\ \int _{t - \varsigma (t)}^t {{y_\mu }(z,x)\mathrm{d}x} \end{array} \right] \end{aligned}$$
(20)

where

$$\begin{aligned} \left[ {\begin{array}{*{20}{c}} {{\mathcal{Z}_1}}&{}{{{\tilde{\mathcal{Z}}}_1}}\\ * &{}{{\mathcal{Z}_1}} \end{array}} \right]> 0, \ \ \left[ {\begin{array}{*{20}{c}} {{\mathcal{Z}_2}}&{}{{{\tilde{\mathcal{Z}}}_2}}\\ * &{}{{\mathcal{Z}_2}} \end{array}} \right] > 0 \end{aligned}$$

In addition,

$$\begin{aligned} \mathcal{L}{V_9}({y_\mu },t) =&\, \int _\Omega {\sum \limits _{\mu = 1}^n {\left\{ ({\varsigma _m} - 2\varsigma (t))[\varpi _1^\mathrm{T},\varpi _2^\mathrm{T}]\mathcal{I}\left[ {\begin{array}{c} {{\varpi _1}}\\ {{\varpi _2}} \end{array}} \right] \right\} } }\mathrm{d}z\\&+ \int _\Omega {\sum \limits _{\mu = 1}^n {\left\{ 2({\varsigma _m} - \varsigma (t))[\varpi _1^\mathrm{T},\varpi _2^\mathrm{T}]\mathcal{I}\left[ {\begin{array}{c} { - {\varpi _1} + {y_\mu }}\\ { - {\varpi _2} + {y_{\mu \eta }}} \end{array}} \right] \right\} } } \mathrm{d}z \end{aligned}$$

From Assumption 1, we can obtain the following inequalities for n-dimensional positive definite diagonal matrices \({\Theta _{o}}\,(o = 1,2,3)\):

$$\begin{aligned} 0\le & {} 2{[\tilde{f}({W_\alpha }{y_\mu }) - {V_1}{W_\alpha }{y_\mu }]^\mathrm{T}}{\Theta _1}[{V_2}{W_\alpha }{y_\mu }- \tilde{f}({W_\alpha }{y_\mu })] \end{aligned}$$
(21)
$$\begin{aligned} 0\le & {} 2{[\tilde{f}({W_\alpha }{y_{\mu h} }) - {V_1}{W_\alpha }{y_{\mu h} }]^\mathrm{T}}{\Theta _2}[{V_2}{W_\alpha }{y_{\mu h}}- \tilde{f}({W_\alpha }{y_{\mu h}})] \end{aligned}$$
(22)
$$\begin{aligned} 0\le & {} 2\{ \tilde{f}({W_\alpha }{y_\mu }) - \tilde{f}({W_\alpha }{y_{\mu h}}) \nonumber \\&- {V_1}[{W_\alpha }{y_\mu } - {W_\alpha }{y_{\mu h} }]\} ^\mathrm{T} {\Theta _3}\{ {V_2}[{W_\alpha }{y_\mu } - {W_\alpha }{y_{\mu h}}]\nonumber \\&\quad - \tilde{f}({W_\alpha }{y_\mu }) + \tilde{f}({W_\alpha }{y_{\mu h} })\} \end{aligned}$$
(23)
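
Assuming the standard sector condition of Assumption 1 (whose full statement is not reproduced here), the nonnegativity of terms such as (21) can be illustrated with a scalar tanh nonlinearity, which lies in the sector \([0,1]\); the snippet below is a hypothetical scalar analogue of (21), not the matrix inequality used in the proof.

```python
import numpy as np

rng = np.random.default_rng(3)
V1, V2, theta = 0.0, 1.0, 2.5             # sector bounds and a positive multiplier
u = rng.uniform(-10, 10, size=100000)     # samples standing in for W_alpha * y_mu
f = np.tanh(u)                            # an activation lying in the sector [0, 1]

term = 2 * (f - V1 * u) * theta * (V2 * u - f)   # scalar analogue of (21)
print(np.all(term >= -1e-12))             # True: the S-procedure term is nonnegative
```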

According to the error system (6), one has

$$\begin{aligned} 0&= 2\int _\Omega \sum \limits _{\mu \mathrm{{ = }}1}^n \left\{ \sum \limits _{i = 1}^r\sum \limits _{j = 1}^r {\theta _i}(\xi ){{\theta _j}(\xi ^\varsigma )} [{\gamma _1}y_\mu ^\mathrm{T}\Gamma \right. \nonumber \\&\quad \left. + {\gamma _2}\dot{y}_\mu ^\mathrm{T}\Gamma ] \left\{ - {{\dot{y}}_\mu } + \sum \limits _{k = 1}^q {\frac{\partial }{{\partial {z_k}}}} ({\mathcal{E}_k}\frac{{\partial {y_\mu }}}{{\partial {z_k}}}) - {\mathcal{B}_{i\alpha }}{y_\mu }+ {\mathcal{C}_{i\alpha }}\tilde{f}({W_\alpha }{y_\mu }) \right. \right. \nonumber \\&\quad \left. \left. +{\mathcal{D}_{i\alpha }}\tilde{f}({W_\alpha }{y_{\mu h}})\right. \right. \nonumber \\&\quad \left. \left. + [{K_{1j}} + \Delta {K_{1j}}( t)]{\mathcal{H}_i}_\alpha {y^\varsigma _\mu } + [{K_{2j}} + \Delta {K_{2j}}(t)]{\mathcal{H}_i}_\alpha {y^\varsigma _{\mu \eta } }\right\} \right\} \mathrm{d}z \end{aligned}$$
(24)

Then, using Lemma 4, we get

$$\begin{aligned}&2[{\gamma _1}y_\mu ^\mathrm{T}\Gamma + {\gamma _2}\dot{y}_\mu ^\mathrm{T}\Gamma ][\Delta {K_{1j}}(t){\mathcal{H}_i}_\alpha {y^\varsigma _\mu } + \Delta {K_{2j}}( t){\mathcal{H}_i}_\alpha {y^\varsigma _{\mu \eta }} ]\\&= 2[{\gamma _1}y_\mu ^\mathrm{T}\Gamma + {\gamma _2}\dot{y}_\mu ^\mathrm{T}\Gamma ]{Q_j}{Y_j}(t)[{\mathcal{N}_{1j}}{\mathcal{H}_i}_\alpha {y^\varsigma _\mu }+ {\mathcal{N}_{2j}}{\mathcal{H}_i}_\alpha {y^\varsigma _{\mu \eta }} ]\\&\quad \le {\varepsilon ^{ - 1}}[{\gamma _1}y_\mu ^\mathrm{T}\Gamma + {\gamma _2}\dot{y}_\mu ^\mathrm{T}\Gamma ]{Q_j}Q_j^\mathrm{T}{[{\gamma _1}y_\mu ^\mathrm{T} \Gamma + {\gamma _2}\dot{y}_\mu ^\mathrm{T}\Gamma ]^\mathrm{T}} \\&\quad + \varepsilon [{\mathcal{N}_{1j}}{\mathcal{H}_i}_\alpha {y^\varsigma _\mu } + {\mathcal{N}_{2j}}{\mathcal{H}_i}_\alpha {y^\varsigma _{\mu \eta }} {]^\mathrm{T}}[{\mathcal{N}_{1j}}{\mathcal{H}_i}_\alpha {y^\varsigma _\mu } + {\mathcal{N}_{2j}}{\mathcal{H}_i}_\alpha {y^\varsigma _{\mu \eta }} ] \end{aligned}$$

Combining (24) and Lemma 2, one can easily derive that

$$\begin{aligned} \mathcal{L}{V_1}({y_\mu }&,t)\le 2\int _\Omega {\sum \limits _{\mu \mathrm{{ = }}1}^n {{\Upsilon _\mu }\mathrm{d}z} } \nonumber \\&+ \int _\Omega {\sum \limits _{\mu = 1}^n {\left\{ {\left. {\sum \limits _{\beta \in \mathcal{S}} {y_\mu ^\mathrm{T}{\varphi _{\alpha \beta }}{\mathcal{P}_\beta }} {y_\mu }} \right\} } \right. } } \mathrm{d}z \nonumber \\&+ \int _\Omega {\sum \limits _{\mu \mathrm{{ = }}1}^n {\left\{ {\sum \limits _{i = 1}^r {\sum \limits _{j = 1}^r {{\theta _i}(\xi ){\theta _j}(\xi ^\varsigma )\varepsilon [{\mathcal{N}_{1j}}{\mathcal{H}_i}_\alpha {y^\varsigma _\mu }} } } \right. } }\nonumber \\&+ {\mathcal{N}_{2j}}{\mathcal{H}_i}_\alpha {y^\varsigma _{\mu \eta } } ]^\mathrm{T}\nonumber \\&\times [{\mathcal{N}_{1j}}{\mathcal{H}_i}_\alpha {y^\varsigma _\mu }\left. { + {\mathcal{N}_{2j}}{\mathcal{H}_i}_\alpha {y^\varsigma _{\mu \eta } } ]} \right\} \mathrm{d}z\nonumber \\&+ \int _\Omega \sum \limits _{\mu = 1}^n \left\{ {\sum \limits _{j = 1}^r {{\theta _j}(\xi ^\varsigma )} } [{\varepsilon ^{ - 1}}\left( {\gamma _1}y_\mu ^\mathrm{T}\Gamma \right. \right. \nonumber \\&\left. \left. + {\gamma _2}\dot{y}_\mu ^\mathrm{T} \Gamma \right) {Q_j}Q_j^\mathrm{T}{{({\gamma _1}y_\mu ^\mathrm{T} \Gamma + {\gamma _2}\dot{y}_\mu ^\mathrm{T} \Gamma )}^\mathrm{T}}] \right\} \mathrm{d}z \end{aligned}$$
(25)

where

$$\begin{aligned} {\Upsilon _\mu } =&\,\sum \limits _{i = 1}^r\sum \limits _{j = 1}^r{{\theta _i}(\xi ) {{\theta _j}(\xi ^\varsigma )} [y_\mu ^\mathrm{T}{\mathcal{P}_\alpha }{{\dot{y}}_\mu }} - {\gamma _1}y_\mu ^\mathrm{T}\Gamma {{\dot{y}}_\mu }- {\gamma _1}y_\mu ^\mathrm{T}\Gamma \tilde{\mathcal{E}}{y_\mu } \\&- {\gamma _1}y_\mu ^\mathrm{T}\Gamma {\mathcal{B}_{i\alpha }}{y_\mu } + {\gamma _1}y_\mu ^\mathrm{T}\Gamma {\mathcal{C}_{i\alpha }}\tilde{f}({W_\alpha }{y_\mu })\\&+ {\gamma _1}y_\mu ^\mathrm{T}\Gamma {\mathcal{D}_{i\alpha }}\tilde{f}({W_\alpha }{y_{\mu h}})\\&+ {\gamma _1}y_\mu ^\mathrm{T}\Gamma {K_{1j}}{\mathcal{H}_i}_\alpha {y^\varsigma _\mu }+ {\gamma _1}y_\mu ^\mathrm{T}\Gamma {K_{2j}}{\mathcal{H}_i}_\alpha {y^\varsigma _{\mu \eta }} \\&- {\gamma _2}\dot{y}_\mu ^\mathrm{T}\Gamma {{\dot{y}}_\mu }- {\gamma _2}\dot{y}_\mu ^\mathrm{T} \Gamma {\mathcal{B}_{i\alpha }}{y_\mu } + {\gamma _2}\dot{y}_\mu ^\mathrm{T} \Gamma {\mathcal{C}_{i\alpha }}\tilde{f}({W_\alpha }{y_\mu })\\&+ {\gamma _2}\dot{y}_\mu ^\mathrm{T} \Gamma {\mathcal{D}_{i\alpha }}\tilde{f}({W_\alpha }{y_{\mu h} }) \\&+ {\gamma _2}\dot{y}_\mu ^\mathrm{T} \Gamma {K_{1j}}{\mathcal{H}_i}_\alpha {y^\varsigma _\mu } + {\gamma _2}\dot{y}_\mu ^\mathrm{T} \Gamma {K_{2j}}{\mathcal{H}_i}_\alpha {y^\varsigma _{\mu \eta } }],\\ \tilde{\mathcal{E}} =&\, \mathrm{diag}\left\{ \sum \limits _{k = 1}^q {\frac{{{{\varepsilon }_{1k}}}}{{l_k^2}}} ,\sum \limits _{k = 1}^q {\frac{{{{\varepsilon }_{2k}}}}{{l_k^2}}} ,\ldots ,\sum \limits _{k = 1}^q {\frac{{{{\varepsilon }_{nk}}}}{{l_k^2}}} \right\} , \end{aligned}$$

and \({l_k} > 0\) are given scalars.

Let \({\hat{K}_{1j}} = \Gamma {K_{1j}}\) and \({\hat{K}_{2j}} = \Gamma {K_{2j}}\); then, for \({t_m} \le t < {t_{m + 1}}\), combining (18)–(25) yields

$$\begin{aligned} \mathcal{L}V({y_\mu },t)&\le \int _\Omega \sum \limits _{\mu = 1}^n \left\{ \sum \limits _{i = 1}^r\sum \limits _{j = 1}^r {{\theta _i}(\xi ){{\theta _j}(\xi ^\varsigma )} } {\nabla ^\mathrm{T}}\left[ \frac{{{\varsigma _m} - \varsigma (t)}}{{{\varsigma _m}}}{\mathcal{S}_{1ij}} \right. \right. \\&\quad\left. \left. + \frac{{\varsigma (t)}}{{{\varsigma _m}}}{\mathcal{S}_{2ij}} \right] \nabla \right\} \mathrm{d}z, \end{aligned}$$

where

$$\begin{aligned} {{\mathcal{S}}_{1ij}} =&\, {\bar{\Sigma }_{1ij}} + {\varsigma _m}{\Sigma _2},\ \ {\mathcal{S}_{2ij}} = {\bar{\Sigma }_{1ij}} + {\varsigma _m}{\Sigma _3},\\ {\bar{\Sigma }_{1ij}} =&\, {\Sigma _{1ij}} + {\varepsilon ^{ - 1}}({\gamma _1}\upsilon _1^\mathrm{T}\Gamma + {\gamma _2}\upsilon _2^\mathrm{T}\Gamma ){Q_j}Q_j^\mathrm{T}{({\gamma _1}\upsilon _1^\mathrm{T}\Gamma + {\gamma _2}\upsilon _2^\mathrm{T}\Gamma )^\mathrm{T}}, \end{aligned}$$

and \({\Sigma _{1ij}}\), \({\Sigma _2}\), \({\Sigma _3}\) have been defined in (8) and (9). It is obvious that if (8)–(11) hold, then \({\mathcal{S}_{1ij}} = {\bar{\Sigma }_{1ij}} + {\varsigma _m}{\Sigma _2} < 0\) and \({\mathcal{S}_{2ij}} = {\bar{\Sigma }_{1ij}} + {\varsigma _m}{\Sigma _3} < 0\), and hence \(\mathcal{L}V({y_\mu },t) < 0\) for every \(\varsigma (t) \in [0,{\varsigma _m}]\). As a result, the error system (6) is asymptotically stable.

Additionally, the controller gains can be recovered as \({K_{1j}} = {\Gamma ^{ - 1}}{\hat{K}_{1j}}\) and \({K_{2j}} = {\Gamma ^{ - 1}}{\hat{K}_{2j}}\). This completes the proof. \(\square\)
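
The final step above relies on the fact that a convex combination of the negative definite matrices \({\mathcal{S}_{1ij}}\) and \({\mathcal{S}_{2ij}}\) remains negative definite for every \(\varsigma (t) \in [0,{\varsigma _m}]\). A small numerical illustration with randomly generated negative definite matrices is sketched below; the matrices are placeholders, not the actual LMI variables of Theorem 1.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 6

def neg_def(rng, n):
    A = rng.standard_normal((n, n))
    return -(A @ A.T) - np.eye(n)          # symmetric and negative definite

S1, S2 = neg_def(rng, n), neg_def(rng, n)
ok = all(
    np.max(np.linalg.eigvalsh(lam * S1 + (1 - lam) * S2)) < 0
    for lam in np.linspace(0, 1, 51)
)
print(ok)   # True: the combination stays negative definite over the whole range
```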


Cite this article

Song, X., Man, J., Song, S. et al. State estimation of T–S fuzzy Markovian generalized neural networks with reaction–diffusion terms: a time-varying nonfragile proportional retarded sampled-data control scheme. Neural Comput & Applic 32, 14639–14653 (2020). https://doi.org/10.1007/s00521-020-04817-7
