Article

Inversion of Tridiagonal Matrices Using the Dunford-Taylor’s Integral

by Diego Caratelli 1,2 and Paolo Emilio Ricci 3,*
1
Department of Research and Development, The Antenna Company, High Tech Campus 29, 5656 AE Eindhoven, The Netherlands
2
Electromagnetics Group, Department of Electrical Engineering, Eindhoven University of Technology, PO Box 513, 5600 MB Eindhoven, The Netherlands
3
Section of Mathematics, International Telematic University UniNettuno, Corso Vittorio Emanuele II, 39, 00186 Roma, Italy
*
Author to whom correspondence should be addressed.
Symmetry 2021, 13(5), 870; https://doi.org/10.3390/sym13050870
Submission received: 12 April 2021 / Revised: 27 April 2021 / Accepted: 7 May 2021 / Published: 13 May 2021
(This article belongs to the Special Issue Complex Variable in Approximation Theory)

Abstract:
We show that using Dunford-Taylor’s integral, a classical tool of functional analysis, it is possible to derive an expression for the inverse of a general non-singular complex-valued tridiagonal matrix. The special cases of Jacobi’s symmetric and Toeplitz (in particular symmetric Toeplitz) matrices are included. The proposed method does not require the knowledge of the matrix eigenvalues and relies only on the relevant invariants which are determined, in a computationally effective way, by means of a dedicated recursive procedure. The considered technique has been validated through several test cases with the aid of the computer algebra program Mathematica©.
MSC:
15A09; 15A60; 65F60; 47A10; 65F05

1. Introduction

The inversion of tridiagonal matrices, and in particular of Jacobi's matrices, has long attracted the attention of researchers [1,2,3,4,5,6,7,8,9,10,11,12,13], since it appears in numerous problems of both theoretical and applied mathematics. It is well known that Jacobi's matrix is symmetric and plays a fundamental role in the theory of orthogonal polynomials. Tridiagonal Toeplitz (and general tridiagonal) matrices can be treated as a generalization of the symmetric case, so that the recursion formulas for the relevant invariants are derived by means of a slight modification of the technique used in the symmetric case. Therefore, the symmetric case is the basis for the results on the inversion problem considered in this article.
Tridiagonal matrices occur in interpolation problems [14] as well as in the solution of boundary value problems for partial differential equations using finite difference methods [15]. The inversion of the general form as well as of the symmetric case, and some special types, can be found in [1,2,6,12], whereas a review of relevant results (up to 1992) is given in [3].
Computationally efficient techniques for the inversion of tridiagonal matrices have been derived in [7,9]. Several studies have been devoted to the explicit computation of the inverse [1,8,13] using a variety of methods, often based on recurrence relations.
For our purposes, the procedure proposed by R.A. Usmani in [4] represents a convenient foundation. Usmani uses the recursion satisfied by the determinants of the principal minors of a tridiagonal matrix to derive an algorithm for computing the relevant inverse. However, Usmani's approach is more complicated than the one presented in this research study, which is based on the Dunford-Taylor integral representation.
We have used the matrices considered in many of the scientific papers referenced above to validate the correctness of our methodology.
To the authors' knowledge, functional analysis techniques have never been applied to this particular framework until now. Actually, the aforementioned mathematical instruments can be broadly adopted in various problems of matrix theory, as has recently been demonstrated in different studies published by the same authors [16,17].
The use of complex variable methods often facilitates the solution of problems in applied mathematics and physics. In this respect, Jacques Hadamard used to state that the shortest path between two truths in the real domain passes through the complex domain [18].

2. Preamble

The Dunford-Taylor (shortly D-T) integral [19] is a basic tool of functional analysis and represents an analogue of Cauchy's integral formula in function theory. It traces back to F. Riesz [20] and L. Fantappiè [21].
In the finite dimensional case, operators are represented by matrices. Let $A$ be a matrix with characteristic polynomial $P(\lambda) = \sum_{k=0}^{n} u_k \lambda^{n-k}$ (where $u_0 := 1$), and let $f$ be a holomorphic function in a domain $\Delta \subseteq \mathbb{C}$ which contains all the eigenvalues $\lambda_h$ of $A$. The coefficients $u_k$ of the characteristic polynomial $P(\lambda)$ are called the matrix invariants, since they are invariant under similarity transformations.
The D-T integral reads:
\[
f(A) = \frac{1}{2\pi i} \oint_{\gamma} f(\lambda)\,(\lambda I - A)^{-1}\, d\lambda,
\]
with $(\lambda I - A)^{-1}$ denoting the resolvent of $A$, and where $\gamma \subset \Delta$ is a simple closed smooth curve with positive orientation enclosing all the $\lambda_h$ in its interior.
In [22] (pp. 93–95), the following representation of the resolvent $(\lambda I - A)^{-1}$ in terms of the invariants of $A$ is proved:
\[
(\lambda I - A)^{-1} = \frac{1}{P(\lambda)} \sum_{k=0}^{n-1} \left( \sum_{h=0}^{n-k-1} (-1)^h u_h\, \lambda^{n-k-h-1} \right) A^k.
\]
Using the equations above, $f(A)$ can be rewritten as:
\[
f(A) = \frac{1}{2\pi i} \sum_{k=1}^{n} \oint_{\gamma} f(\lambda)\, \frac{\sum_{h=0}^{k-1} (-1)^h u_h\, \lambda^{k-h-1}}{P(\lambda)}\, d\lambda\; A^{n-k}.
\]
Note that, if $\Delta$ does not contain the origin, a simple choice of $\gamma$ is a circle centered at the origin with radius larger than the spectral radius $\rho_A$ of $A$. The spectral radius $\rho_A$ can be easily bounded from above through the Gershgorin circle theorem, using only the entries of $A$.
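As a concrete illustration, the Gershgorin bound is just the largest absolute row sum. The following minimal Python sketch is ours (not part of the paper); the test matrix is the third-order example analyzed later in Section 5.1:

```python
def gershgorin_radius(A):
    """Upper bound on the spectral radius via the Gershgorin circle theorem:
    every eigenvalue lies in a disc centered at a_ii with radius equal to the
    sum of the off-diagonal moduli in row i, so max_i sum_j |a_ij| bounds |lambda|."""
    return max(sum(abs(x) for x in row) for row in A)

A = [[2, 3, 0], [1, 6, 7], [0, 4, 5]]
print(gershgorin_radius(A))   # 14 (the eigenvalue of largest modulus is 11)
```

Any radius larger than this bound is therefore a valid choice for the circle $\gamma$.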
The representation above applies in particular to the matrix exponential $\exp(A)$, showing that its usual definition as a power series in $A$ is not needed: as is well known [23], every analytic function of $A$ is nothing but a matrix polynomial, that is, $f(A) \equiv P(A)$, where $P$ is the polynomial interpolating the function $f$ at the eigenvalues of $A$.
Consider now the function $f(\lambda) = \lambda^{-1}$. As this function is holomorphic in the open set $\mathbb{C} \setminus \{0\}$, namely the whole complex plane excluding the origin, the preceding result becomes:
Theorem 1.
If $A$ is a non-singular complex matrix and $\gamma = \gamma_1 \cup \gamma_2$ is a simple contour enclosing all the zeros of $P(\lambda)$ (where $\gamma_1$, oriented counter-clockwise, encloses all the eigenvalues of $A$, and $\gamma_2$, oriented clockwise, encloses the origin but no eigenvalues of $A$), then the inverse of $A$ can be represented as:
\[
A^{-1} = \frac{1}{2\pi i} \sum_{k=1}^{n} \oint_{\gamma_1 \cup \gamma_2} \frac{\sum_{h=0}^{k-1} (-1)^h u_h\, \lambda^{k-h-1}}{\lambda\, P(\lambda)}\, d\lambda\; A^{n-k}.
\]
Remark 1.
Note that the knowledge of the eigenvalues of $A$ is not strictly necessary when computing the integrals above. As a matter of fact, unless we make use of Cauchy's residue theorem, the knowledge of the matrix invariants suffices, since we can compute said contour integrals directly by choosing two circles, both centered at the origin, with the radius of $\gamma_1$ larger than the spectral radius of $A$ and the radius of $\gamma_2$ smaller than the minimum modulus of the eigenvalues of $A$.
This approach can be more convenient since it does not require the explicit, in general cumbersome, computation of the matrix eigenvalues.
Using the D-T’s integral, we have shown, in preceding articles, how to address different problems, such as the computation of the roots [16] and the inverse [17] of general non-singular complex-valued matrices. In particular, in [17], the proposed procedure has been applied to the solution of basic analytical problems involving linear algebraic equations, as well as initial value problems for linear systems of ordinary differential equations.
Of course, Theorem 1 applies also to the particular case of a tridiagonal matrix $T_n$ of general order $n$. The only problem is how to evaluate the invariants of such a matrix in terms of its entries. This calculation can be easily performed using a three-term recurrence relation which generalizes the one proven in [24] for the Jacobi matrices appearing in the theory of orthogonal polynomials.
By using said recursion, it is possible to use the D-T integral in the derivation of the inverse of a general non-singular tridiagonal matrix and, in particular, of Jacobi's and Toeplitz matrices. The test cases illustrated in the last sections of this research study prove the effectiveness of the considered methodology by recovering, in a simple and uniform way, the results already reported in the articles by Usmani [4], Salkuyeh [25] and El-Mikkawy and Karawia [7].

3. The Invariants of a Tridiagonal Matrix

Consider the $n \times n$ tridiagonal complex matrix (with $n = 1, 2, \ldots$)
\[
T_n = \begin{pmatrix}
a_1 & c_1 & 0 & \cdots & 0 \\
b_1 & a_2 & c_2 & \ddots & \vdots \\
0 & b_2 & a_3 & \ddots & 0 \\
\vdots & \ddots & \ddots & \ddots & c_{n-1} \\
0 & \cdots & 0 & b_{n-1} & a_n
\end{pmatrix}.
\]
In what follows, we assume that the matrix $T_n$ is non-singular, that is, $u(n,n) \neq 0$, $\forall n$.
The relevant characteristic polynomial
\[
P_n(x) = \det(x I - T_n) = \sum_{k=0}^{n} (-1)^k u(k,n)\, x^{n-k},
\]
satisfies the three-term recurrence relation:
\[
P_{-1}(x) = 0, \quad P_0(x) = 1, \quad P_n(x) = (x - a_n)\, P_{n-1}(x) - b_{n-1} c_{n-1}\, P_{n-2}(x) \quad (n \geq 1),
\]
as it is easily seen by using the Laplace expansion of the determinant with respect to the last column of the general matrix.
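The recurrence above can be turned directly into an evaluation scheme for $P_n(x)$. The sketch below is our illustration in Python (not from the paper); it evaluates the characteristic polynomial of the third-order matrix of Section 5.1, whose eigenvalue 11 is recovered as a root:

```python
def charpoly(x, a, b, c):
    """Evaluate P_n(x) = det(x I - T_n) via the three-term recurrence
    P_n = (x - a_n) P_{n-1} - b_{n-1} c_{n-1} P_{n-2}, with P_{-1} = 0, P_0 = 1.
    a is the diagonal, b the subdiagonal, c the superdiagonal."""
    p_prev, p = 0.0, 1.0                      # P_{-1}, P_0
    for k in range(len(a)):
        off = b[k-1] * c[k-1] if k > 0 else 0.0
        p_prev, p = p, (x - a[k]) * p - off * p_prev
    return p

# T_3 of Section 5.1: diagonal (2, 6, 5), subdiagonal (1, 4), superdiagonal (3, 7)
a, b, c = [2, 6, 5], [1, 4], [3, 7]
print(charpoly(11.0, a, b, c))   # 0.0 (11 is an eigenvalue)
print(charpoly(0.0, a, b, c))    # 11.0 (= P_3(0) = (-1)^3 det T_3)
```

The recurrence costs $O(n)$ operations per evaluation point, which is what makes the invariant-based approach computationally attractive.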
With a slight modification of the results in [24], a recursion for the invariants $u(h,n)$, $h = 1, 2, \ldots, n$, of $T_n$ easily follows [22,23,26].
Theorem 2.
For the term u ( k , n ) appearing in (2), the following recursion holds true:
\[
u(k,n) = u(k,n-1) + a_n\, u(k-1,n-1) - b_{n-1} c_{n-1}\, u(k-2,n-2),
\]
with $k = 1, 2, \ldots, n$; $n = 1, 2, \ldots$, and where the initial values are given by:
\[
u(n, n-1) = u(-1, n-1) = u(-1, n-2) = u(-2, n-2) = 0.
\]
Proof. 
The proof follows by substituting the expression (2) into the recursion (4) and, afterwards, by equating the coefficients corresponding to the same powers of x. □
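As an illustration (ours, not from the paper), the recursion of Theorem 2 takes only a few lines of Python. With this sign convention $u(1,n)$ is the trace and $u(n,n)$ the determinant of $T_n$; run on the third-order matrix of Section 5.1 it returns the invariants used there:

```python
def invariants(a, b, c):
    """Invariants u(k, n) of the tridiagonal matrix with diagonal a,
    subdiagonal b and superdiagonal c, via the recursion of Theorem 2:
    u(k,n) = u(k,n-1) + a_n u(k-1,n-1) - b_{n-1} c_{n-1} u(k-2,n-2),
    with u(0,m) = 1 and u(k,m) = 0 whenever k < 0 or k > m."""
    u = [[1]]                                   # u[m][k] stores u(k, m)
    for m in range(1, len(a) + 1):
        row = [1]                               # u(0, m) = 1
        for k in range(1, m + 1):
            v = (u[m-1][k] if k < m else 0) + a[m-1] * u[m-1][k-1]
            if m >= 2 and k >= 2:
                v -= b[m-2] * c[m-2] * u[m-2][k-2]
            row.append(v)
        u.append(row)
    return u[-1][1:]

print(invariants([2, 6, 5], [1, 4], [3, 7]))   # [13, 21, -11]
```

The computation is $O(n^2)$ overall and uses only the matrix entries, with no eigenvalue information.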

4. Matrix Inversion Using Dunford-Taylor’s Integral

In a recent article [17], using the Dunford-Taylor integral [19] (actually due to F. Riesz [20] and L. Fantappiè [21]), we have proved the following theorem:
Theorem 3.
If $A$ is a non-singular complex matrix of order $n$ and $\gamma = \gamma_1 \cup \gamma_2$ is a simple contour enclosing all the zeros of $P_n(\lambda)$ (where $\gamma_1$, oriented counter-clockwise, encircles all the eigenvalues of $A$, and $\gamma_2$, oriented clockwise, encloses the origin but no eigenvalues of $A$), then the inverse of $A$ is represented by:
\[
A^{-1} = \frac{1}{2\pi i} \sum_{k=1}^{n} \oint_{\gamma_1 \cup \gamma_2} \frac{\sum_{h=0}^{k-1} (-1)^h u_h\, \lambda^{k-h-1}}{\lambda\, P_n(\lambda)}\, d\lambda\; A^{n-k}.
\]
Upon choosing the curves $\gamma_1$ and $\gamma_2$ with the aid of the results reported in [27], and denoting the integrand in (5) by $\Phi_k = \Phi_k(\lambda)$ ($k = 1, 2, \ldots, n$), the individual contour integrals can be evaluated by means of Cauchy's residue theorem [28]. Taking $\gamma_1$ as a circle centered at the origin with radius larger than the spectral radius of $A$ (so that it encloses the origin as well), the clockwise contribution of $\gamma_2$ cancels the residue at the origin, and one obtains:
\[
\oint_{\gamma_1 \cup \gamma_2} \frac{\sum_{h=0}^{k-1} (-1)^h u_h\, \lambda^{k-h-1}}{\lambda\, P_n(\lambda)}\, d\lambda = 2\pi i \left( \sum_{\ell=1}^{n} \operatorname{Res}_{\lambda=\lambda_\ell} \Phi_k(\lambda) + \operatorname{Res}_{\lambda=0} \Phi_k(\lambda) \right) - 2\pi i \operatorname{Res}_{\lambda=0} \Phi_k(\lambda) = 2\pi i \sum_{\ell=1}^{n} \operatorname{Res}_{\lambda=\lambda_\ell} \Phi_k(\lambda).
\]
As we have recalled in [17,29], the evaluation of $A^{-1}$ does not necessarily require the use of (6). In fact, by applying the Gershgorin circle theorem, the curves $\gamma_1$ and $\gamma_2$ can be properly chosen so as to allow a simple direct computation.
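To make the direct computation concrete, the following Python sketch is ours: where the paper uses a Gauss-Kronrod rule, we substitute the plain trapezoidal rule, which converges geometrically fast on circles. It evaluates the coefficients $\xi_k$ of $A^{-1} = \sum_k \xi_k A^{n-k}$ for the third-order matrix of Section 5.1 from its invariants alone; the radii $R = 14$ (its Gershgorin bound) and $r = 0.2$ (below the smallest eigenvalue modulus, about 0.414) are chosen by hand and are assumptions of this example:

```python
import cmath

def xi_coefficients(u, R, r, m=2000):
    """Coefficients xi_k in A^{-1} = sum_{k=1}^n xi_k A^{n-k}, obtained by
    integrating the integrand Phi_k of Theorem 3 over two circles centered at
    the origin: gamma_1 (radius R > spectral radius, counter-clockwise) and
    gamma_2 (radius r < min |eigenvalue|, clockwise).
    u = [u(1,n), ..., u(n,n)] are the invariants, with the sign convention
    P_n(x) = sum_k (-1)^k u(k,n) x^(n-k) of Section 3."""
    n = len(u)
    uu = [1] + list(u)                                  # u(0, n) = 1
    P = lambda z: sum((-1) ** k * uu[k] * z ** (n - k) for k in range(n + 1))
    def phi(k, z):
        num = sum((-1) ** h * uu[h] * z ** (k - h - 1) for h in range(k))
        return num / (z * P(z))
    dt = 2 * cmath.pi / m
    xi = []
    for k in range(1, n + 1):
        total = 0j
        for j in range(m):
            z_out = R * cmath.exp(1j * j * dt)          # gamma_1, counter-clockwise
            z_in = r * cmath.exp(1j * j * dt)           # gamma_2, clockwise: minus sign
            total += phi(k, z_out) * 1j * z_out * dt
            total -= phi(k, z_in) * 1j * z_in * dt
        xi.append((total / (2j * cmath.pi)).real)
    return xi

# invariants of the Section 5.1 matrix (u_3 = det = -11)
print(xi_coefficients([13, 21, -11], R=14.0, r=0.2))
# approximately [-1/11, 13/11, -21/11] = [-0.0909, 1.1818, -1.9091]
```

Note that no eigenvalue of the matrix enters the computation: only the invariants and the two radii are needed.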

4.1. Inversion of a Tridiagonal Matrix

Assuming the matrix invariants to be known on the basis of the recursion (4), we can enunciate the following result:
Theorem 4.
The inverse of the non-singular tridiagonal matrix (1) can be computed using the Dunford-Taylor’s integral (5), where γ 1 is a circle oriented counter-clockwise which encloses all the eigenvalues of T n and γ 2 is a circle oriented clockwise which encloses the origin but no eigenvalues of T n .

4.1.1. The Case of a Tridiagonal Toeplitz Matrix

For tridiagonal Toeplitz matrices, whose entries $a$, $b$, $c$ are constant with respect to the relevant indices, we have the following result:
Theorem 5.
Under the assumptions above for the circles γ 1 and γ 2 , the inverse of a non-singular tridiagonal Toeplitz matrix T n with entries a, b, c is given by the Dunford-Taylor’s integral (5) with the invariants being computed using the following recursion:
\[
u(k,n) = u(k,n-1) + a\, u(k-1,n-1) - b c\, u(k-2,n-2),
\]
with $k = 1, 2, \ldots, n$; $n = 1, 2, \ldots$, and where we assume the same initial values as in Theorem 2.
Equation (5) requires the computation of the powers of the matrix $T_n$. This is trivial for small values of $n$, whereas, for high matrix orders, the relevant computation can be performed using different algorithms available in the scientific literature, such as the ones detailed in [30,31], as well as in [25] for the special case of tridiagonal Toeplitz matrices.
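In this connection, a simple alternative worth noting (our remark, not from the paper): once the coefficients $\xi_k$ are available, the sum $\sum_k \xi_k T_n^{n-k}$ can be accumulated with Horner's scheme using $n-1$ matrix products, without ever forming the powers of $T_n$ explicitly. A minimal Python sketch:

```python
def poly_in_matrix(xi, A):
    """Evaluate sum_{k=1}^n xi_k A^(n-k) by Horner's scheme:
    ((xi_1 I) A + xi_2 I) A + ... + xi_n I, i.e. n-1 matrix products."""
    n = len(A)
    matmul = lambda X, Y: [[sum(X[i][p] * Y[p][j] for p in range(n))
                            for j in range(n)] for i in range(n)]
    B = [[xi[0] if i == j else 0.0 for j in range(n)] for i in range(n)]
    for coeff in xi[1:]:
        B = matmul(B, A)
        for i in range(n):
            B[i][i] += coeff
    return B

# reproduces the inverse of the Section 5.1 matrix from its xi coefficients
A = [[2, 3, 0], [1, 6, 7], [0, 4, 5]]
B = poly_in_matrix([-1/11, 13/11, -21/11], A)
print(B[0][0])   # about -0.18182 (= -2/11)
```

The same evaluation order is the standard way of computing matrix polynomials and keeps the memory footprint to a single work matrix.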

4.1.2. The Case of a Jacobi’s Matrix

In the particular case of Jacobi's matrices, which play an important role in the theory of orthogonal polynomials [24], we have:
\[
J_n = \begin{pmatrix}
a_1 & b_1 & 0 & \cdots & 0 \\
b_1 & a_2 & b_2 & \ddots & \vdots \\
0 & b_2 & a_3 & \ddots & 0 \\
\vdots & \ddots & \ddots & \ddots & b_{n-1} \\
0 & \cdots & 0 & b_{n-1} & a_n
\end{pmatrix}.
\]
Theorem 6.
Under the assumptions above for the circles γ 1 and γ 2 , the inverse of the non-singular Jacobi’s matrix (8) can be computed using the Dunford-Taylor’s integral (5) with the relevant invariants given by the following three-term recursion:
\[
u(k,n) = u(k,n-1) + a_n\, u(k-1,n-1) - b_{n-1}^2\, u(k-2,n-2),
\]
with $k = 1, 2, \ldots, n$; $n = 1, 2, \ldots$.
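For instance (our sketch, assuming the entries of the Section 5.3 example), the symmetric specialization of the invariant recursion reads:

```python
def jacobi_invariants(a, b):
    """Invariants of the symmetric (Jacobi) tridiagonal matrix with diagonal a
    and off-diagonal b: the recursion of Theorem 2 with c_k = b_k, so the
    product b_{n-1} c_{n-1} becomes b_{n-1}^2."""
    u = [[1]]                                   # u[m][k] stores u(k, m)
    for m in range(1, len(a) + 1):
        row = [1]
        for k in range(1, m + 1):
            v = (u[m-1][k] if k < m else 0) + a[m-1] * u[m-1][k-1]
            if m >= 2 and k >= 2:
                v -= b[m-2] ** 2 * u[m-2][k-2]
            row.append(v)
        u.append(row)
    return u[-1][1:]

# Jacobi matrix of Section 5.3: diagonal (1, 1, 1, 2), off-diagonal (1, 2, 1)
print(jacobi_invariants([1, 1, 1, 2], [1, 2, 1]))   # [5, 3, -10, -8]
```

Here $u(4,4) = -8$ is the determinant of the matrix, in agreement with the eigenvalue product reported in Section 5.3.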

5. Numerical Examples

In what follows, we show some examples of the computational procedure described in the previous sections. To this end, the computer algebra program Mathematica© has been used, enforcing 16-digit accuracy.
The examples considered here are taken from previous articles by Usmani [4], Salkuyeh [25] and El-Mikkawy and Karawia [7]. It is thus shown that the developed procedure is effective and computationally efficient. In all cases, the inverse matrix $T_n^{-1}$, as computed using the proposed approach, verifies the basic property defining the inverse, that is $T_n T_n^{-1} = I_n$ (the identity matrix of order $n$), within machine precision.
However, it is not possible to compare the computational complexity with that of the algorithms presented by the aforementioned authors, since the calculations performed by Mathematica cannot be inspected, the program being proprietary.

5.1. Inversion of a Tridiagonal Matrix of Third Order

Let us consider the tridiagonal matrix of order n = 3 analyzed in [4]:
\[
A = \begin{pmatrix}
2 & 3 & 0 \\
1 & 6 & 7 \\
0 & 4 & 5
\end{pmatrix}.
\]
The relevant invariants can be easily computed through the iterative procedure detailed in Section 3 as:
\[
u_1 = 13, \quad u_2 = 21, \quad u_3 = -11.
\]
Therefore, it is not difficult to verify that the corresponding eigenvalues are:
\[
\lambda_1 \simeq 11.0000, \quad \lambda_2 \simeq 2.41421, \quad \lambda_3 \simeq -0.414214.
\]
By using the Dunford-Taylor’s integral formula (5) in combination with the Gauss-Kronrod integration rule, the following representation of the inverse of A is obtained:
\[
A^{-1} = \sum_{k=1}^{n} \xi_k\, A^{n-k},
\]
with:
\[
\xi_1 \simeq -0.090909, \quad \xi_2 \simeq 1.18182, \quad \xi_3 \simeq -1.90909,
\]
this leading to the conclusion that:
\[
A^{-1} \simeq \begin{pmatrix}
-0.181818 & 1.36364 & -1.90909 \\
0.454545 & -0.909091 & 1.27273 \\
-0.363636 & 0.727273 & -0.818182
\end{pmatrix}.
\]
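As a sanity check (ours, not part of the paper), the decimal entries above are elevenths; multiplying the exact rational form back against $A$ recovers the identity up to rounding:

```python
# inverse of the Section 5.1 matrix written as exact elevenths
A = [[2, 3, 0], [1, 6, 7], [0, 4, 5]]
A_inv = [[-2/11, 15/11, -21/11],    # -0.181818,  1.36364, -1.90909
         [ 5/11, -10/11, 14/11],    #  0.454545, -0.909091, 1.27273
         [-4/11,  8/11,  -9/11]]    # -0.363636,  0.727273, -0.818182
prod = [[sum(A[i][p] * A_inv[p][j] for p in range(3)) for j in range(3)]
        for i in range(3)]
print(prod[0][0], prod[0][1], prod[2][2])   # 1, 0 and 1 up to rounding
```

This is exactly the $T_n T_n^{-1} = I_n$ check described at the beginning of this section.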

5.2. Inversion of Tridiagonal Toeplitz Matrix of Fourth Order

Let us consider the tridiagonal Toeplitz matrix of order n = 4 analyzed in [25]:
\[
A = \begin{pmatrix}
5 & 1 & 0 & 0 \\
4 & 5 & 1 & 0 \\
0 & 4 & 5 & 1 \\
0 & 0 & 4 & 5
\end{pmatrix}.
\]
The relevant invariants can be easily computed through the iterative procedure detailed in Section 3 as:
\[
u_1 = 20, \quad u_2 = 138, \quad u_3 = 380, \quad u_4 = 341.
\]
Therefore, it is not difficult to verify that the corresponding eigenvalues are:
\[
\lambda_1 \simeq 8.23607, \quad \lambda_2 \simeq 6.23607, \quad \lambda_3 \simeq 3.76393, \quad \lambda_4 \simeq 1.76393.
\]
By using the Dunford-Taylor’s integral formula (5) in combination with the Gauss-Kronrod integration rule, the following representation of the inverse of A is obtained:
\[
A^{-1} = \sum_{k=1}^{n} \xi_k\, A^{n-k},
\]
with:
\[
\xi_1 \simeq -0.00293250, \quad \xi_2 \simeq 0.0586510, \quad \xi_3 \simeq -0.404692, \quad \xi_4 \simeq 1.11437,
\]
this leading to the conclusion that:
\[
A^{-1} \simeq \begin{pmatrix}
0.249267 & -0.0615835 & 0.0146627 & -0.00293250 \\
-0.246334 & 0.307918 & -0.0733137 & 0.0146627 \\
0.234604 & -0.293255 & 0.307918 & -0.0615835 \\
-0.187683 & 0.234604 & -0.246334 & 0.249267
\end{pmatrix}.
\]

5.3. Inversion of Jacobi’s Matrix of Fourth Order

Let us consider the Jacobi symmetric matrix of order n = 4 analyzed in [7]:
\[
A = \begin{pmatrix}
1 & 1 & 0 & 0 \\
1 & 1 & 2 & 0 \\
0 & 2 & 1 & 1 \\
0 & 0 & 1 & 2
\end{pmatrix}.
\]
The relevant invariants can be easily computed through the iterative procedure detailed in Section 3 as:
\[
u_1 = 5, \quad u_2 = 3, \quad u_3 = -10, \quad u_4 = -8.
\]
Therefore, it is not difficult to verify that the corresponding eigenvalues are:
\[
\lambda_1 \simeq 3.52892, \quad \lambda_2 \simeq 2.00000, \quad \lambda_3 \simeq -1.36147, \quad \lambda_4 \simeq 0.832551.
\]
By using the Dunford-Taylor’s integral formula (5) in combination with the Gauss-Kronrod integration rule, the following representation of the inverse of A is obtained:
\[
A^{-1} = \sum_{k=1}^{n} \xi_k\, A^{n-k},
\]
with:
\[
\xi_1 \simeq 0.125000, \quad \xi_2 \simeq -0.625000, \quad \xi_3 \simeq 0.375000, \quad \xi_4 \simeq 1.25000,
\]
this leading to the conclusion that:
\[
A^{-1} \simeq \begin{pmatrix}
0.875000 & 0.125000 & -0.500000 & 0.250000 \\
0.125000 & -0.125000 & 0.500000 & -0.250000 \\
-0.500000 & 0.500000 & 0.000000 & 0.000000 \\
0.250000 & -0.250000 & 0.000000 & 0.500000
\end{pmatrix}.
\]

5.4. Inversion of Jacobi’s Matrix of Fifth Order

Let us consider the Jacobi symmetric matrix of order n = 5 analyzed in [4]:
\[
A = \begin{pmatrix}
5 & 1 & 0 & 0 & 0 \\
1 & 4 & 2 & 0 & 0 \\
0 & 2 & 3 & 4 & 0 \\
0 & 0 & 4 & 2 & 3 \\
0 & 0 & 0 & 3 & 1
\end{pmatrix}.
\]
The relevant invariants can be easily computed through the iterative procedure detailed in Section 3 as:
\[
u_1 = 15, \quad u_2 = 55, \quad u_3 = -81, \quad u_4 = -631, \quad u_5 = -563.
\]
Therefore, it is not difficult to verify that the corresponding eigenvalues are:
\[
\lambda_1 \simeq 7.77275, \quad \lambda_2 \simeq 5.50180, \quad \lambda_3 \simeq 3.65809, \quad \lambda_4 \simeq -3.09534, \quad \lambda_5 \simeq 1.16270.
\]
By using the Dunford-Taylor’s integral formula (5) in combination with the Gauss-Kronrod integration rule, the following representation of the inverse of A is obtained:
\[
A^{-1} = \sum_{k=1}^{n} \xi_k\, A^{n-k},
\]
with:
\[
\xi_1 \simeq -0.00177610, \quad \xi_2 \simeq 0.0266429, \quad \xi_3 \simeq -0.0976909, \quad \xi_4 \simeq -0.143872, \quad \xi_5 \simeq 1.12078,
\]
this leading to the conclusion that:
\[
A^{-1} \simeq \begin{pmatrix}
0.213144 & -0.0657193 & 0.0248667 & 0.0142095 & -0.0426287 \\
-0.0657193 & 0.328597 & -0.124334 & -0.0710479 & 0.213144 \\
0.0248667 & -0.124334 & 0.236234 & 0.134991 & -0.404973 \\
0.0142095 & -0.0710479 & 0.134991 & -0.0657193 & 0.197158 \\
-0.0426287 & 0.213144 & -0.404973 & 0.197158 & 0.408526
\end{pmatrix}.
\]

5.5. Inversion of Tridiagonal Toeplitz Matrix of Fifth Order

Let us consider the tridiagonal Toeplitz matrix of order n = 5 analyzed in [7]:
\[
A = \begin{pmatrix}
2 & -1 & 0 & 0 & 0 \\
-1 & 2 & -1 & 0 & 0 \\
0 & -1 & 2 & -1 & 0 \\
0 & 0 & -1 & 2 & -1 \\
0 & 0 & 0 & -1 & 2
\end{pmatrix}.
\]
The relevant invariants can be easily computed through the iterative procedure detailed in Section 3 as:
\[
u_1 = 10, \quad u_2 = 36, \quad u_3 = 56, \quad u_4 = 35, \quad u_5 = 6.
\]
Therefore, it is not difficult to verify that the corresponding eigenvalues are:
\[
\lambda_1 \simeq 3.73205, \quad \lambda_2 \simeq 3.00000, \quad \lambda_3 \simeq 2.00000, \quad \lambda_4 \simeq 1.00000, \quad \lambda_5 \simeq 0.267949.
\]
By using the Dunford-Taylor’s integral formula (5) in combination with the Gauss-Kronrod integration rule, the following representation of the inverse of A is obtained:
\[
A^{-1} = \sum_{k=1}^{n} \xi_k\, A^{n-k},
\]
with:
\[
\xi_1 \simeq 0.166667, \quad \xi_2 \simeq -1.66667, \quad \xi_3 \simeq 6.00000, \quad \xi_4 \simeq -9.33333, \quad \xi_5 \simeq 5.83333,
\]
this leading to the conclusion that:
\[
A^{-1} \simeq \begin{pmatrix}
0.833333 & 0.666667 & 0.500000 & 0.333333 & 0.166667 \\
0.666667 & 1.33333 & 1.00000 & 0.666667 & 0.333333 \\
0.500000 & 1.00000 & 1.50000 & 1.00000 & 0.500000 \\
0.333333 & 0.666667 & 1.00000 & 1.33333 & 0.666667 \\
0.166667 & 0.333333 & 0.500000 & 0.666667 & 0.833333
\end{pmatrix}.
\]

6. Conclusions

We have presented a method for the inversion of non-singular tridiagonal matrices that makes use of a classical functional analysis tool, namely the Dunford-Taylor’s integral, which extends Cauchy’s integral formula to the case of operators.
Since, in the finite dimensional case, operators are represented by matrices, the general formula for the inversion of an operator [19] can be used for matrices as well [16,17]. In this article, we have applied this result to general non-singular tridiagonal matrices and, as particular cases, to Jacobi's and Toeplitz matrices.
Since the application of Dunford-Taylor’s integral requires the knowledge of the matrix invariants, a preceding result relevant to tridiagonal Jacobi’s matrices (see [24]) has been extended and, in this way, a recursion formula for the evaluation of the invariants of a general tridiagonal matrix has been obtained. This makes it possible to apply the formulas for the inversion of a general non-singular matrix in [17] to the case of general tridiagonal matrices and, therefore, to Jacobi’s and Toeplitz matrices in particular.
The proposed technique has been validated through several test cases using the computer algebra program Mathematica©.

Author Contributions

Data curation, D.C.; Investigation, P.E.R.; Methodology, P.E.R.; Software, D.C.; Supervision, D.C.; Validation, D.C.; Writing — original draft, P.E.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Schlegel, P. The explicit inverse of a tridiagonal matrix. Appl. Math. Comput. 1970, 24, 665. [Google Scholar] [CrossRef]
  2. Lewis, J.W. Inversion of tridiagonal matrices. Numer. Math. 1982, 38, 333–345. [Google Scholar] [CrossRef]
  3. Meurant, G. A review on the inverse of symmetric tridiagonal and block tridiagonal matrices. SIAM J. Matrix Anal. Appl. 1992, 13, 707–728. [Google Scholar] [CrossRef] [Green Version]
  4. Usmani, R.A. Inversion of Jacobi’s Tridiagonal Matrix. Comput. Math. Applic. 1994, 27, 59–66. [Google Scholar] [CrossRef] [Green Version]
  5. Huang, Y.; McColl, W.F. Analytical inversion of general tridiagonal matrices. J. Phys. A Math. Gen. 1997, 30, 7919. [Google Scholar] [CrossRef]
  6. Mallik, R.K. The inverse of a tridiagonal matrix. Linear Algebra Appl. 2001, 325, 109–139. [Google Scholar] [CrossRef] [Green Version]
  7. El-Mikkawy, M.; Karawia, A. Inversion of general tridiagonal matrices. Appl. Math. Lett. 2006, 19, 712–720. [Google Scholar] [CrossRef] [Green Version]
  8. Kılıç, E. Explicit formula for the inverse of a tridiagonal matrix by backward continued fractions. Appl. Math. Comput. 2008, 197, 345–357. [Google Scholar] [CrossRef]
  9. Ran, R.; Huang, T.; Liu, X.; Gu, T. An inversion algorithm for general tridiagonal matrix. Appl. Math. Mech. 2009, 30, 247–253. [Google Scholar] [CrossRef]
  10. Sugimoto, T. On an inverse formula for a tridiagonal matrix. Oper. Matrices 2012, 6, 465–480. [Google Scholar] [CrossRef]
  11. Abderramán Marrero, J.; Rachidi, M. A note on representations for the inverses of tridiagonal matrices. Linear Multilinear Algebr. 2013, 61, 1181–1191. [Google Scholar] [CrossRef]
  12. Hovda, S. Closed-form expression for the inverse of a class of tridiagonal matrices. Numer. Algebra Control Optim. 2016, 6, 437–445. [Google Scholar] [CrossRef] [Green Version]
  13. Tan, L.S.L. Explicit inverse of tridiagonal matrix with applications in autoregressive modeling. IMA J. Appl Math. 2019, 84, 679–695. [Google Scholar]
  14. Knott, G.D. Interpolating Cubic Splines; Birkhäuser: Boston, MA, USA, 2000. [Google Scholar]
  15. Pozrikidis, C. An Introduction to Grids, Graphs, and Networks; Oxford University Press: New York, NY, USA, 2014. [Google Scholar]
  16. Caratelli, D.; Ricci, P.E. A Numerical Method for Computing the Roots of Non-Singular Complex-Valued Matrices. Symmetry 2020, 12, 966. [Google Scholar] [CrossRef]
  17. Caratelli, D.; Palini, E.; Ricci, P.E. Finite dimensional applications of the Dunford-Taylor’s integral. Bull. TICMI N.1 2021, in press. [Google Scholar]
  18. Maz’ya, V.; Shaposhnikova, T. Jacques Hadamard, A Universal Mathematician; American Mathematical Society: Providence, RI, USA, 1999. [Google Scholar]
  19. Kato, T. Perturbation Theory for Linear Operators; Springer: Berlin/Heidelberg, Germany, 1966. [Google Scholar]
  20. Riesz, F.; Sz.-Nagy, B. Functional Analysis; Dover Publications Inc.: New York, NY, USA, 1990. [Google Scholar]
  21. Fantappiè, L. I massimi e i minimi dei funzionali analitici reali. Atti Acc. Naz. Lincei Rend. Sci. Fis. Matem. Nat. 1930, 6, 296–301. [Google Scholar]
  22. Cherubino, S. Calcolo Delle Matrici; Cremonese: Roma, Italy, 1957. [Google Scholar]
  23. Gantmacher, F.R. The Theory of Matrices; Chelsea Pub. Co.: New York, NY, USA, 1959. [Google Scholar]
  24. Natalini, P.; Ricci, P.E. Newton Sum Rules of polynomials defined by a three-term recurrence relation. Computers Math. Applic. 2001, 42, 767–771. [Google Scholar] [CrossRef] [Green Version]
  25. Salkuyeh, D.K. Positive integer powers of the tridiagonal Toeplitz matrices. Intern. Math. Forum 2006, 22, 1061–1065. [Google Scholar] [CrossRef]
  26. Higham, N.J. Functions of Matrices. Theory and Computation; SIAM: Philadelphia, PA, USA, 2008. [Google Scholar]
  27. Hirst, H.P.; Macey, W.T. Bounding the Roots of Polynomials. Coll. Math. 1997, 28, 292–295. [Google Scholar] [CrossRef]
  28. Rudin, W. Real and Complex Analysis, 3rd ed.; McGraw-Hill: Singapore, 1987. [Google Scholar]
  29. Bruschi, M.; Ricci, P.E. An explicit formula for f(A) and the generating function of the generalized Lucas polynomials. SIAM J. Math. Anal. 1982, 13, 162–165. [Google Scholar] [CrossRef]
  30. Al-Hassan, Q.M. On Powers of Tridiagonal Matrices with Nonnegative Entries. Appl. Math. Sci. 2012, 48, 2357–2368. [Google Scholar]
  31. Al-Hassan, Q.M. On Powers of General Tridiagonal Matrices. Appl. Math. Sci. 2015, 9, 583–592. [Google Scholar] [CrossRef] [Green Version]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
