Properties and computation of continuous-time solutions to linear systems

https://doi.org/10.1016/j.amc.2021.126242

Highlights

  • A novel method for finding the A_{T,S}^{(2)}-inverse solution to a given linear system under some constraints is defined.

  • Various dynamical systems for computing the A_{T,S}^{(2)}-inverse solution are considered.

  • Main properties of the A_{T,S}^{(2)}-inverse solution, as well as the convergence results, are established.

  • Properties of the proposed Zhang neural network (ZNN) models for solving time-varying linear systems are considered.

  • Numerical examples are presented.

Abstract

We investigate solutions to the system of linear equations (SoLE) in both the time-varying and time-invariant cases, using both gradient neural network (GNN) and Zhang neural network (ZNN) designs. Two major limitations must be overcome. The first is the inapplicability of GNN models in time-varying environments; the second is that the ZNN design can be used only when the coefficient matrix is invertible. In this paper, by overcoming these limitations, we propose a suitable solution, in all possible cases, for a consistent or inconsistent linear system. Convergence properties are investigated, as are exact solutions.

Introduction

According to the traditional notation, C^{m×n} (resp. R^{m×n}) denotes the set of m×n complex (resp. real) matrices. Further, rank(A), A^*, R(A) and N(A) denote the rank, the conjugate transpose, the range (column space) and the null space of A ∈ C^{m×n}. The index of A ∈ C^{n×n} is the minimal k determined by rank(A^k) = rank(A^{k+1}) and is termed ind(A).

For the notation and main properties of generalized inverses, we suggest the monographs [2], [30], [42]. The Drazin inverse of A ∈ C^{n×n} is the unique A^D ∈ C^{n×n} which fulfills the matrix equations
A^{k+1}X = A^k,  XAX = X,  AX = XA,  k = ind(A).
The group inverse A^# coincides with A^D in the case ind(A) = 1. The Moore-Penrose (M.P.) inverse of A ∈ C^{m×n} is the unique A^† ∈ C^{n×m} satisfying
AXA = A,  XAX = X,  (AX)^* = AX,  (XA)^* = XA.
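For illustration only (not part of the original development), the four Penrose equations above can be checked numerically for a rank-deficient matrix, using NumPy's built-in Moore-Penrose inverse:

```python
import numpy as np

# Illustrative sketch: verify the four Penrose equations defining the
# Moore-Penrose inverse, using np.linalg.pinv on a rank-deficient matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3)) @ rng.standard_normal((3, 5))  # rank <= 3

X = np.linalg.pinv(A)  # Moore-Penrose inverse A^dagger

ok = (np.allclose(A @ X @ A, A)            # AXA = A
      and np.allclose(X @ A @ X, X)        # XAX = X
      and np.allclose((A @ X).T, A @ X)    # (AX)* = AX (real case: symmetry)
      and np.allclose((X @ A).T, X @ A))   # (XA)* = XA
print(ok)
```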

A matrix X ∈ C^{n×m} which fulfils
XAX = X,  R(X) = T,  N(X) = S,
is an outer inverse of A with predefined range T and null space S, and is denoted by A_{T,S}^{(2)}.

The M.P. inverse A^†, the weighted M.P. inverse A^†_{M,N}, the Drazin inverse A^D and the group inverse A^# are particular outer inverses A_{T,S}^{(2)}:
A^† = A^{(2)}_{R(A^*),N(A^*)},  A^†_{M,N} = A^{(2)}_{R(A^♯),N(A^♯)},
where M, N are positive definite and A^♯ = N^{-1}A^*M. The next statements are fulfilled for A ∈ C^{n×n} (see [2], [30]):
A^D = A^{(2)}_{R(A^k),N(A^k)},  A^# = A^{(2)}_{R(A),N(A)},  k = ind(A).

Applications of generalized inverses have been investigated in numerous studies. The Drazin inverse has been used in finite Markov chains, in the study of differential and singular linear difference equations [3], in cryptography [18], etc. Generalized inverses also show useful properties in solving systems of linear equations (SoLE). It is a common fact that the minimum-norm least-squares solution of the inconsistent linear system Ax = b is given by the M.P. solution A^†b (see [24]). This fact caused a renaissance in the study of generalized inverses. The correlation between generalized inverses and least-squares solutions is established via the well-known fact that ‖Ax − b‖ is smallest if and only if x = A^{(1,3)}b, where A^{(1,3)} ∈ A{1,3}. Later, this extremely important property was generalized to other types of generalized inverses. The minimum-norm (N) least-squares (M) solution to the inconsistent system Ax = b is given by the weighted M.P. inverse solution A^†_{M,N}b [7]. The unique solution to the restricted linear system Ax = b in the case x ∈ R(A^k), k = ind(A), is the Drazin inverse solution A^Db (see [2]). Moreover, the unique solution to the restricted linear system WAWx = b is A_{d,W}b, where A_{d,W} is the weighted Drazin inverse, x ∈ R((AW)^{k_1}), b ∈ R((AW)^{k_2}), k_1 = ind(AW) and k_2 = ind(WA) (see [39]). In the most general case, A^{(2)}_{T,S}b is the solution to the restricted linear system Ax = b, x ∈ T, b ∈ AT [5, Lemma 3.1], [6].
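The minimum-norm least-squares property of A^†b can be illustrated with a short NumPy sketch (our construction, not taken from the paper): for a rank-deficient, inconsistent system, the M.P. solution coincides with the minimum-norm solution returned by a standard least-squares solver.

```python
import numpy as np

# Sketch: for an inconsistent system Ax = b, the Moore-Penrose solution
# A^dagger b is the least-squares solution of minimum Euclidean norm;
# np.linalg.lstsq (LAPACK gelsd) returns the same vector.
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4))
A[:, 3] = A[:, 0] + A[:, 1]          # force rank deficiency (rank 3)
b = rng.standard_normal(6)           # generically not in R(A): inconsistent

x_mp = np.linalg.pinv(A) @ b                     # M.P. solution
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)     # minimum-norm LS solution

print(np.allclose(x_mp, x_ls))
```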

In all aspects of optimization and numerical analysis, the M.P. inverse solution to a linear system is an important tool, commonly used in many practical fields, such as image processing, economics, medicine, etc. Later, the authors in [23] generalized the idea to Hilbert spaces and showed how it can be applied to compute a {1,3}-inverse or the M.P. inverse.

In the following, we restate results from [44] that establish the minimal properties of the Drazin-inverse solution.

Theorem 1.1

[44] Suppose that A ∈ R^{n×n} with matrix index equal to p. The unique solution in R(A^p) of
A^{p+1}x = A^p b    (1.3)
is A^D b.

Theorem 1.2

[43], [44] Suppose that A ∈ C^{n×n} with matrix index equal to p, and b ∈ C^n. All solutions of (1.3) are given by
x = A^D b + N(A^p).

Since the linear system (1.3) is analogous to the normal equation
A^*Ax = A^*b,
we will call it the normal generalized equation of
Ax = b,  b ∈ R(A^k),  k = ind(A).    (1.6)
The vector A^D b is known as the Drazin-inverse solution to the system (1.6). Since the Drazin inverse provides the system with a solution, the system (1.6) can be considered a Drazin-consistent system.
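Theorem 1.1 can be checked numerically. The sketch below is ours, not the paper's method; it relies on the known identity A^D = A^k (A^{2k+1})^† A^k (valid for any k ≥ ind(A)), which reduces the Drazin inverse to a Moore-Penrose computation.

```python
import numpy as np

# Illustration of Theorem 1.1 on a 3x3 matrix of index 2:
# a 1x1 invertible block [2] plus a 2x2 nilpotent block.
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
k = 2  # ind(A)

# A^D via the Moore-Penrose-based identity A^D = A^k (A^(2k+1))^dagger A^k
Ak = np.linalg.matrix_power(A, k)
AD = Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak

b = np.array([4.0, 1.0, 3.0])
x = AD @ b                                    # the Drazin-inverse solution A^D b

# x solves A^(p+1) x = A^p b (and lies in R(A^p) = span{e1})
lhs = np.linalg.matrix_power(A, k + 1) @ x
rhs = Ak @ b
print(np.allclose(lhs, rhs))
```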

To that end, applications of generalized inverses in solving linear systems have often been studied. The method of conjugate gradients was applied to computing the Drazin inverse solution of an inconsistent linear system in the case when A is Hermitian positive semidefinite [15]. Various semi-iterative methods for solving inconsistent linear systems were proposed in [9], [13]. Moreover, a number of Krylov subspace methods for solving linear systems were considered in [1], [10], [11], [15], [17], [26], [27], [53]. A unified framework of Krylov subspace methods for arbitrary linear systems was given in [25]. In addition, the Drazin inverse solution can be obtained using index splitting methods [40] as well as the extended Cramer rule [29]. The determinantal representation of A^{(2)}_{T,S}b was investigated in [46].

In the present paper, our purpose is to investigate the properties of the A_{T,S}^{(2)} inverse, in particular the A_{T,S}^{(2)}-inverse solution to a consistent or inconsistent linear system.

The general structure of the sections is as follows. Section 2 gives some preliminaries about the continuous-time approach to solving time-invariant linear systems (TILS) and time-varying linear systems (TVLS). Section 3 is aimed at the definition of a new method for finding the A_{T,S}^{(2)}-inverse solution to a given linear system under some constraints. Various dynamical systems for computing the A_{T,S}^{(2)}-inverse solution are considered in Section 4, where the main properties of the A_{T,S}^{(2)}-inverse solution, as well as the convergence result for our method, are established. Properties of Zhang neural network (ZNN) models for finding the A_{T,S}^{(2)}-inverse solution of time-varying linear systems are considered in Section 5. Simulation examples are reported in the concluding Section 6.

Section snippets

Dynamical systems approach in solving linear systems

Our goal is to investigate the application of generalized inverses in solving the TILS
Ax = b,  A ∈ R^{m×n},  x ∈ R^n,  b ∈ R^m.    (2.1)

The parallel-processing computational gradient neural network (GNN) model for solving (2.1), where A is regular, was proposed in [31]. The model is also known as the Wang neural network (WNN) model. It is necessary to consider the error function in vector form e(t) = Av(t) − b and the scalar error-monitoring goal function
ε(t) = (1/2)‖Av(t) − b‖_F^2,
where ‖·‖_F denotes the Frobenius norm. Now, the GNN dynamic
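The GNN design above drives the state v(t) along the negative gradient of ε(t), i.e. dv/dt = −γ A^T (A v − b). A minimal sketch (the forward-Euler discretization and parameter values are our assumptions; the paper works with the continuous-time model):

```python
import numpy as np

# Forward-Euler simulation of the GNN dynamics dv/dt = -gamma * A^T (A v - b)
# for a small nonsingular TILS; the state converges to the exact solution.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])     # nonsingular coefficient matrix
b = np.array([1.0, 2.0])

gamma, dt = 10.0, 1e-3         # design gain and step size (our choices)
v = np.zeros(2)                # arbitrary initial state
for _ in range(20000):
    v = v - dt * gamma * A.T @ (A @ v - b)   # Euler step of the GNN flow

print(np.allclose(v, np.linalg.solve(A, b), atol=1e-6))
```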

Minimal properties of AT,S(2)-inverse solution

We consider A ∈ C^{m×n} and R ∈ C^{n×m} which is generated by the requirement
rank(AR) = rank(RA) = rank(R),  AR(R) ⊕ N(R) = C^m.    (3.1)

Lemma 3.1

Let A ∈ C_r^{m×n} and R ∈ C_s^{n×m} be generated according to conditions (3.1). Then the following statements hold:

(a) The TILS
RAx = Rb,  x ∈ C^n,  b ∈ C^m    (3.2)
is always consistent.

(b) The solution set of (3.2) is defined by
x = A^{(2)}_{R(R),N(R)}b + N(RA).

Proof

(a) Since Rb ∈ R(R) = R(RA), the TILS (3.2) is always consistent [21].

(b) The solution set of the homogeneous SoLE RAx = 0 is equal to N(RA). In addition, since
RA·A^{(2)}_{R(R),N(R)} = RP_{R(
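Lemma 3.1 can be sanity-checked numerically in the special case R = A^* (our choice, for which conditions (3.1) hold and A^{(2)}_{R(R),N(R)} is just the M.P. inverse A^†): the system RAx = Rb becomes the normal equations A^*Ax = A^*b, which x = A^†b must solve even when Ax = b itself is inconsistent.

```python
import numpy as np

# Special case R = A^* of Lemma 3.1: RAx = Rb is the (always consistent)
# normal equations, solved by the outer-inverse solution x = A^dagger b.
rng = np.random.default_rng(2)
A = rng.standard_normal((5, 3))
b = rng.standard_normal(5)            # generically b is not in R(A)

R = A.T                               # R = A^* (real case)
x = np.linalg.pinv(A) @ b             # the outer-inverse solution

print(np.allclose(R @ A @ x, R @ b))  # RAx = Rb holds
```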

Dynamical systems for computing AT,S(2)-inverse solution

This section is aimed at investigating various dynamical systems for computing the A_{T,S}^{(2)}-inverse solution. In Section 4, the applicability of the A_{T,S}^{(2)}-inverse solution is justified by the theoretical results given in Section 3. Moreover, such solutions are the only possible approach to solving singular rectangular SoLE.

ZNN Models for solving TVLS

In this section we intend to apply the ZNN design to solve time-varying linear systems, both regular and singular, both consistent and inconsistent.
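For the regular case, the classical ZNN design can be sketched as follows (our simulation, not a model from the paper; the singular case requires the A_{T,S}^{(2)} machinery): from e(t) = A(t)x(t) − b(t) and the design rule de/dt = −γe one obtains A(t)ẋ = ḃ − Ȧx − γ(A(t)x − b(t)).

```python
import numpy as np

# Forward-Euler integration of the ZNN dynamics for a regular TVLS
# (the example system and all parameter values are our assumptions).
A  = lambda t: np.array([[2 + np.sin(t), 0.0], [0.0, 2 + np.cos(t)]])
dA = lambda t: np.array([[np.cos(t), 0.0], [0.0, -np.sin(t)]])
b  = lambda t: np.array([np.sin(t), np.cos(t)])
db = lambda t: np.array([np.cos(t), -np.sin(t)])

gamma, dt = 100.0, 1e-4
t, x = 0.0, np.zeros(2)
for _ in range(50000):                 # integrate to t = 5
    rhs = db(t) - dA(t) @ x - gamma * (A(t) @ x - b(t))
    x = x + dt * np.linalg.solve(A(t), rhs)
    t += dt

# x(t) tracks the time-varying exact solution A(t)^{-1} b(t)
print(np.allclose(x, np.linalg.solve(A(t), b(t)), atol=1e-3))
```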

Numerical results

In order to demonstrate the efficiency of the proposed models, we present several simulation examples in this section. Simulations are performed on the developed GNN and ZNN models, both in the time-invariant and in the time-varying case.

Conclusion

Our general scope is to solve the general singular or rectangular TVLS or TILS A(t)x(t) = b(t), using either GNN or ZNN models.

  • TILS case, i.e., A(t) = A and b(t) = b.

    a) A is an n×n singular matrix. Then we have two directions:

    i) Direct usage of ZNN models is not possible, since they can be applied only to nonsingular systems. However, we can first apply Tikhonov regularization to the system, and then use the ZNN models on the regularized linear system.

    ii) We can use GNN models.

    Keep in mind that in order
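Direction i) above can be sketched in a few lines (plain NumPy; the standard Tikhonov form is our assumption, not a specific model from the paper): for singular A, replace Ax = b by the regularized system (A^T A + λI)x = A^T b, which is nonsingular for every λ > 0, so a ZNN model (or any direct solver) applies; as λ → 0 the solution tends to the minimum-norm least-squares solution A^†b.

```python
import numpy as np

# Tikhonov regularization of a singular system: the regularized solution
# approximates the Moore-Penrose solution A^dagger b for small lam > 0.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])             # singular 2x2 matrix (rank 1)
b = np.array([1.0, 2.0])               # consistent: b lies in R(A)

lam = 1e-6                             # small regularization parameter
x_reg = np.linalg.solve(A.T @ A + lam * np.eye(2), A.T @ b)

print(np.allclose(x_reg, np.linalg.pinv(A) @ b, atol=1e-6))
```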

References (53)

  • Y. Wei

    Index splitting for the Drazin inverse and the singular linear system

    Appl. Math. Comput.

    (1998)
  • Y. Wei et al.

    Additional results on index splittings for Drazin inverse solutions of singular linear systems

    Electron. J. Linear Algebra

    (2001)
  • Y. Yu et al.

    Determinantal representation of the generalized inverse A_{T,S}^{(2)} over integral domains and its applications

    Linear Multilinear Algebra

    (2009)
  • Y. Zhang et al.

    Zhang neural networks and neural-dynamic method

    (2011)
  • Y. Zhang et al.

    Global exponential convergence and stability of gradient-based neural network for online matrix inversion

    Appl. Math. Comput.

    (2009)
  • J. Zhou et al.

    A two-step algorithm for solving singular linear systems with index one

    Appl. Math. Comput.

    (2001)
  • W.E. Arnoldi

    The principle of minimized iterations in the solution of the matrix eigenvalue problem

    Quart. Appl. Math.

    (1951)
  • A. Ben-Israel et al.

    Generalized inverses: theory and applications, second ed.

    (2003)
  • S.L. Campbell et al.

    Generalized inverses of linear transformations

    (1979)
  • K. Chen

    Implicit dynamic system for online simultaneous linear equations solving

    Electron. Lett.

    (2013)
  • Y.L. Chen

    Finite algorithms for the (2)-generalized inverse A_{T,S}^{(2)}

    Linear Multilinear Algebra

    (1995)
  • Y.L. Chen

    A Cramer rule for solution of the general restricted linear equation

    Linear Multilinear Algebra

    (1993)
  • J.S. Chipman

    On least-squares with insufficient observations

    J. Amer. Statist. Assoc.

    (1964)
  • A. Cichocki et al.

    Neural networks for solving systems of linear equations and related problems

    IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications

    (1992)
  • S.C. Eisenstat et al.

    Variational iterative methods for nonsymmetric systems of linear equations

    SIAM J. Numer. Anal.

    (1983)
  • B. Fischer et al.

    A note on conjugate gradient type methods for indefinite and/or inconsistent linear systems

    Numer. Algorithms

    (1996)


    Predrag Stanimirović is supported by the Ministry of Education, Science and Technological Development, Republic of Serbia, Grant No. 174013.


    Dijana Mosić is supported by the Ministry of Education, Science and Technological Development, Republic of Serbia, Grant No. 174007.
