
Positive coincidence points for a class of nonlinear operators and their applications to matrix equations

  • Imed Kedim, Maher Berzig and Ahdi Noomen Ajmi
From the journal Open Mathematics

Abstract

Consider an ordered Banach space and f, g two self-operators defined on the interior of its positive cone. In this article, we prove that the equation f(X) = g(X) has a positive solution whenever f is strictly α-concave g-monotone or strictly (−α)-convex g-antitone, with g super-homogeneous and surjective. As applications, we show the existence of positive definite solutions to new classes of nonlinear matrix equations.

MSC 2010: 54H25; 47H07; 39B42

1 Introduction

The coincidence point theory is a powerful tool in nonlinear analysis for solving a wide range of nonlinear equations arising from various applications in engineering, economics and mechanics, see for instance [1,2,3,4,5,6,7,8]. In particular, nonlinear equations in Banach spaces involving α-concave and (−α)-convex operators are considered in [9,10,11,12,13,14,15] and some references therein.

The main aim of this article is to show the existence and uniqueness of coincidence points for two operators f and g using new iterative methods, where f is strictly g-monotone α-concave or strictly g-antitone (−α)-convex, with g super-homogeneous and surjective. As applications, we show the existence of positive definite solutions to a new family of nonlinear matrix equations of the type

g(X) = Q + ∑_{i=1}^{k} A_i^* f_i(X) A_i,

where Q is a positive semidefinite matrix, the A_i are arbitrary square matrices, the f_i are Löwner-Heinz monotone operators and g is an Uchiyama operator [16,17,18]. Our results improve, extend and generalize some existing ones in the literature [11,15,19,20]. Furthermore, we provide two algorithms involving the Newton-Raphson method for solving new classes of nonlinear matrix equations. Next, we present some numerical experiments to illustrate their efficiency. Finally, we apply our algorithms to compute a positive root of certain polynomials. For recently developed methods to solve more nonlinear equations, we refer the reader to [21,22,23] and some references therein. The remainder of the article is structured as follows. Section 2 contains the notation and basic definitions. The main theorems of coincidence points are stated and proved in Section 3. Some particular classes of operators are investigated in Section 4. The applications and the numerical experiments are presented in Section 5.

2 Preliminaries

Let (E, ‖·‖) be a real Banach space and denote by 0_E its zero element. A nonempty subset P of E is called a cone if it satisfies λP ⊆ P for any positive real λ. A cone P is called pointed if P ∩ (−P) = {0_E}, solid if P̊, the interior of P, is nonempty, and it is called normal if there exists a positive constant c_N, called the normal constant with respect to ‖·‖, such that 0_E ≤ x ≤ y implies ‖x‖ ≤ c_N ‖y‖. If P is a convex cone, then E is partially ordered with respect to P, that is, x ≤ y if and only if y − x ∈ P. In the rest of the article, we assume that P is a normal, solid, closed, convex and pointed cone of E. If x ≤ y, the set

[x, y] ≔ {z ∈ E : x ≤ z ≤ y} = (x + P) ∩ (y − P),

is called an order-interval, where x, y ∈ E. The notation x ≪ y is used if y − x ∈ P̊, and we define the set

[x | y] ≔ {z ∈ E : x ≪ z ≪ y} = (x + P̊) ∩ (y − P̊).

The open ball centered at z ∈ E with radius ε > 0 is denoted by B(z, ε).

Lemma 2.1

Let x, y ∈ P̊. If x ≪ y, then for all ε > 0, the set B(y, ε) ∩ [x | y] is nonempty and open.

Proof

The element λx + (1 − λ)y belongs to the set B(y, ε) ∩ [x | y] whenever λ ∈ (0, 1) and λ‖x − y‖ < ε.□

Definition 2.2

Let f, g : P̊ → P̊ be two operators. We say that:

  1. f is strictly α-concave if there exists a constant α ∈ (0, 1) such that f(tx) ≫ t^α f(x), or equivalently f(t^{−1}x) ≪ t^{−α} f(x), for all t ∈ (0, 1) and x ∈ P̊.

  2. f is strictly (−α)-convex if there exists a constant α ∈ (0, 1) such that f(tx) ≪ t^{−α} f(x), or equivalently f(t^{−1}x) ≫ t^α f(x), for all t ∈ (0, 1) and x ∈ P̊.

Definition 2.3

Let g : P̊ → P̊ be an operator. We say that g is strictly super-homogeneous if g(tx) ≪ t g(x), or equivalently g(t^{−1}x) ≫ t^{−1} g(x), for all t ∈ (0, 1) and x ∈ P̊.
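Both definitions are easy to test numerically in the scalar cone P = (0, ∞), where ≪ reduces to <. The following sketch is our own illustration (not taken from the paper); it checks the strict 1/2-concavity of the hypothetical choice f(x) = 1 + √x and the strict super-homogeneity of g(x) = x^2 + x, the scalar function that reappears as g_1 in Example 5.10.

```python
import numpy as np

# Scalar illustration of Definitions 2.2(1) and 2.3 on P = (0, inf), where "<<" is "<".
# Hypothetical examples: f(x) = 1 + sqrt(x) and g(x) = x**2 + x.
f = lambda x: 1.0 + np.sqrt(x)
g = lambda x: x**2 + x

rng = np.random.default_rng(0)
for _ in range(1000):
    t = rng.uniform(0.01, 0.99)          # t in (0, 1)
    x = rng.uniform(0.01, 100.0)         # x in the interior of the cone
    assert f(t * x) > t**0.5 * f(x)      # strict 1/2-concavity: f(tx) >> t^alpha f(x)
    assert g(t * x) < t * g(x)           # strict super-homogeneity: g(tx) << t g(x)
print("checked strict 1/2-concavity of f and super-homogeneity of g on 1000 samples")
```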

Definition 2.4

Let f : P̊ → P̊ be an operator. We say that:

  1. f is strictly monotone (resp. antitone) if x ≪ y ⟹ f(x) ≪ f(y) (resp. f(y) ≪ f(x)).

  2. f is strictly reverse monotone (resp. antitone) if f(x) ≪ f(y) ⟹ x ≪ y (resp. y ≪ x).

  3. f is strictly inverse monotone if f is invertible and the inverse of f is strictly monotone.

Definition 2.5

Let f, g : P̊ → P̊ be two operators. We say that:

  1. f is strictly g-monotone if g(x) ≪ g(y) ⟹ f(x) ≪ f(y).

  2. f is strictly g-antitone if g(x) ≪ g(y) ⟹ f(y) ≪ f(x).

Remark 2.6

Every strictly inverse monotone (resp. antitone) operator is strictly reverse monotone (resp. antitone). The concept of strictly monotone (resp. antitone) is a particular case of the notion of strictly g-monotone (resp. strictly g-antitone), where g is the identity operator.

As an immediate consequence, we have the following result.

Lemma 2.7

Let f, g : P̊ → P̊ be two operators. Assume that g is invertible and its inverse is strictly monotone. If f is strictly monotone (resp. strictly antitone), then f is strictly g-monotone (resp. strictly g-antitone).

Further examples of strictly monotone operators are presented in Section 5.

3 Coincidence point theorems

The set of coincidence points of f and g in P̊ will be denoted by C(f, g) ≔ {x ∈ P̊ : f(x) = g(x)}, and it will be identified with its only element when it is a singleton. A point x ∈ P̊ will be called a common fixed point if f(x) = g(x) = x. Now, we are ready to state our main results.

Theorem 3.1

Let f, g : P̊ → P̊ be operators. Assume that g is surjective and strictly super-homogeneous and that f is strictly α-concave and strictly g-monotone. Then, C(f, g) is nonempty and g(C(f, g)) is a singleton. More precisely, if {ε_n} is a decreasing real sequence convergent to zero, then there exist two sequences {u_n} and {v_n} in P̊ such that for all n ≥ 1, we have

(1) ‖g(u_{n+1}) − f(u_n)‖ ≤ ε_n,  ‖g(v_{n+1}) − f(v_n)‖ ≤ ε_n,

(2) f(u_{n−1}) ≪ g(u_{n+1}) ≪ f(u_n) ≪ f(v_n) ≪ g(v_{n+1}) ≪ f(v_{n−1}),

and the sequences {g(u_n)} and {g(v_n)} converge to g(C(f, g)). In addition, f and g have a unique common fixed point if and only if g(C(f, g)) ∈ C(f, g). In particular, if C(f, g) ⊆ C(f∘g, g∘f), then f and g have a unique common fixed point.

Theorem 3.2

Let f, g : P̊ → P̊ be operators. Assume that g is surjective and strictly super-homogeneous, and that f is strictly (−α)-convex and strictly g-antitone. Then, C(f, g) is nonempty and g(C(f, g)) is a singleton. More precisely, if {ε_n} is a decreasing real sequence convergent to zero, then there exist two sequences {u_n} and {v_n} in P̊ such that for all n ≥ 1, we have

(3) ‖g(u_{n+1}) − f(v_n)‖ ≤ ε_n,  ‖g(v_{n+1}) − f(u_n)‖ ≤ ε_n,

(4) f(v_{n−1}) ≪ g(u_{n+1}) ≪ f(v_n) ≪ f(u_n) ≪ g(v_{n+1}) ≪ f(u_{n−1}),

and the sequences {g(u_n)} and {g(v_n)} converge to g(C(f, g)). In addition, f and g have a unique common fixed point if and only if g(C(f, g)) ∈ C(f, g). In particular, if C(f, g) ⊆ C(f∘g, g∘f), then f and g have a unique common fixed point.

To prove these theorems, we need the following lemmas. The next lemma gives a sufficient condition for the existence of sequences satisfying (1) and (2).

Lemma 3.3

Let f, g : P̊ → P̊ be two operators such that g is surjective and f is strictly g-monotone. Assume that there exist x_0 and y_0 in P̊ such that

(5) g(x_0) ≪ f(x_0) ≪ f(y_0) ≪ g(y_0).

Then, for every decreasing real sequence {ε_n} converging to zero, there exist two sequences {u_n} and {v_n} in P̊ such that (1) and (2) hold.

Proof

Let u_0 = x_0 and v_0 = y_0. By hypothesis, using the surjectivity of g and Lemma 2.1, there exist u_1, v_1 ∈ P̊ such that g(u_1) ∈ B(f(u_0), ε_0) ∩ [g(u_0) | f(u_0)] and g(v_1) ∈ B(f(v_0), ε_0) ∩ [f(v_0) | g(v_0)]; then the elements u_0, u_1, v_0, v_1 satisfy g(u_0) ≪ g(u_1) ≪ f(u_0) ≪ f(v_0) ≪ g(v_1) ≪ g(v_0). As f is strictly g-monotone, we obtain g(u_0) ≪ g(u_1) ≪ f(u_0) ≪ f(u_1) ≪ f(v_1) ≪ f(v_0) ≪ g(v_1) ≪ g(v_0). Using again the surjectivity of g, there exist u_2, v_2 such that g(u_2) ∈ B(f(u_1), ε_1) ∩ [f(u_0) | f(u_1)] and g(v_2) ∈ B(f(v_1), ε_1) ∩ [f(v_1) | f(v_0)]. By construction, the elements u_0, u_1, u_2, v_0, v_1, v_2 satisfy (1) and (2) for n = 1. For the induction step, assume that, for some positive integer n and for all k ≤ n, there exist u_k, v_k in P̊ satisfying (1) and (2). Hence, it follows that

(6) g(u_{n−1}) ≪ f(u_{n−2}) ≪ g(u_n) ≪ f(u_{n−1}) ≪ f(v_{n−1}) ≪ g(v_n) ≪ f(v_{n−2}) ≪ g(v_{n−1}).

As f is strictly g-monotone, we get

(7) f(u_{n−1}) ≪ f(u_n) ≪ f(v_n) ≪ f(v_{n−1}).

Combining (6) and (7), we obtain

(8) g(u_n) ≪ f(u_{n−1}) ≪ f(u_n) ≪ f(v_n) ≪ f(v_{n−1}) ≪ g(v_n).

As before, from the surjectivity of g, there exist u_{n+1}, v_{n+1} ∈ P̊ such that g(u_{n+1}) ∈ B(f(u_n), ε_n) ∩ [f(u_{n−1}) | f(u_n)] and g(v_{n+1}) ∈ B(f(v_n), ε_n) ∩ [f(v_n) | f(v_{n−1})]. Consequently, (1) and (2) hold for all n.□

The following lemma gives a sufficient condition for the existence of sequences satisfying (3) and (4).

Lemma 3.4

Let f, g : P̊ → P̊ be two operators such that g is surjective and f is strictly g-antitone. Assume that there exist x_0 and y_0 in P̊ such that

(9) g(x_0) ≪ f(y_0) ≪ f(x_0) ≪ g(y_0).

Then, for every decreasing real sequence {ε_n} convergent to zero, there exist two sequences {u_n}, {v_n} in P̊ such that (3) and (4) hold.

Proof

Let u_0 = x_0 and v_0 = y_0. By hypothesis, using the surjectivity of g and Lemma 2.1, there exist u_1, v_1 ∈ P̊ such that g(u_1) ∈ B(f(v_0), ε_0) ∩ [g(u_0) | f(v_0)] and g(v_1) ∈ B(f(u_0), ε_0) ∩ [f(u_0) | g(v_0)]; then the elements u_0, u_1, v_0, v_1 satisfy g(u_0) ≪ g(u_1) ≪ f(v_0) ≪ f(u_0) ≪ g(v_1) ≪ g(v_0). As f is strictly g-antitone, we obtain g(u_0) ≪ g(u_1) ≪ f(v_0) ≪ f(v_1) ≪ f(u_1) ≪ f(u_0) ≪ g(v_1) ≪ g(v_0). Using again the surjectivity of g, there exist u_2, v_2 such that g(u_2) ∈ B(f(v_1), ε_1) ∩ [f(v_0) | f(v_1)] and g(v_2) ∈ B(f(u_1), ε_1) ∩ [f(u_1) | f(u_0)]. By construction, the elements u_0, u_1, u_2, v_0, v_1, v_2 satisfy (3) and (4) for n = 1. For the induction step, assume that, for some positive integer n and for all k ≤ n − 1, there exist u_k, v_k in P̊ satisfying (3) and (4). Hence, it follows that

(10) g(u_{n−1}) ≪ f(v_{n−2}) ≪ g(u_n) ≪ f(v_{n−1}) ≪ f(u_{n−1}) ≪ g(v_n) ≪ f(u_{n−2}) ≪ g(v_{n−1}).

As f is strictly g-antitone, we get

(11) f(v_{n−1}) ≪ f(v_n) ≪ f(u_n) ≪ f(u_{n−1}).

Combining (10) and (11), we obtain

(12) g(u_n) ≪ f(v_{n−1}) ≪ f(v_n) ≪ f(u_n) ≪ f(u_{n−1}) ≪ g(v_n).

As before, from the surjectivity of g, there exist u_{n+1}, v_{n+1} ∈ P̊ such that g(u_{n+1}) ∈ B(f(v_n), ε_n) ∩ [f(v_{n−1}) | f(v_n)] and g(v_{n+1}) ∈ B(f(u_n), ε_n) ∩ [f(u_n) | f(u_{n−1})]. Consequently, (3) and (4) hold for all n.□

Condition (5) (resp. (9)) is obtained by the following lemma for our classes of operators.

Lemma 3.5

Let f, g : P̊ → P̊ be two operators such that g is surjective and strictly super-homogeneous and f is strictly α-concave and strictly g-monotone (resp. strictly (−α)-convex and strictly g-antitone). Then, there exist x_0, y_0 ∈ P̊ satisfying (5) (resp. (9)).

Proof

Let h ∈ P̊. Choose t_0 ∈ (0, 1) sufficiently small such that

(13) t_0^{1−α} g(h) ≪ f(h) ≪ t_0^{α−1} g(h).

Assume that g is strictly super-homogeneous. For x_0 = t_0 h and y_0 = t_0^{−1} h, we deduce that

g(y_0) − g(x_0) = g(y_0) − g(t_0^2 y_0) ≫ g(y_0) − t_0^2 g(y_0) = (1 − t_0^2) g(y_0),

which implies that

(14) g(x_0) ≪ g(y_0).

Since f is strictly α-concave, it follows from (13) that

f(x_0) = f(t_0 h) ≫ t_0^α f(h) ≫ t_0 g(h) ≫ g(t_0 h) = g(x_0),

f(y_0) = f(t_0^{−1} h) ≪ t_0^{−α} f(h) ≪ t_0^{−1} g(h) ≪ g(t_0^{−1} h) = g(y_0).

Now, using (14) combined with the fact that f is strictly g-monotone, we obtain (5). Assume now that f is strictly g-antitone and strictly (−α)-convex. Then, it follows from (13) that

f(x_0) = f(t_0 h) ≪ t_0^{−α} f(h) ≪ t_0^{−1} g(h) ≪ g(t_0^{−1} h) = g(y_0),

f(y_0) = f(t_0^{−1} h) ≫ t_0^α f(h) ≫ t_0 g(h) ≫ g(t_0 h) = g(x_0).

Now, from (14) and the fact that f is strictly g-antitone, we obtain (9).□

The following two technical lemmas are needed to prove the convergence of the sequences.

Lemma 3.6

Let f, g : P̊ → P̊ be two operators. Assume that g is strictly super-homogeneous and f is strictly α-concave and strictly g-monotone. Let {u_n}, {v_n} be sequences satisfying (1) and (2), and set t_n ≔ sup{t ∈ (0, 1] : g(u_n) ≥ t g(v_n)} for all n ≥ 0. Then, lim_{n→∞} t_n = 1.

Proof

Observe first that, from the definition of t_n and (2), it follows that g(u_{n+1}) ≥ g(u_n) ≥ t_n g(v_n) ≥ t_n g(v_{n+1}), so we have t_n ≤ t_{n+1} for all n, which implies 0 < t_0 ≤ t_1 ≤ ⋯ ≤ t_n ≤ ⋯ ≤ 1. Then, there is t ∈ (0, 1] such that lim_{n→∞} t_n = t. We claim that t = 1. Assume by contradiction that t < 1.

Observe that from the super-homogeneity of g, we have

(15) g(u_n) ≥ t_n g(v_n) ≫ g(t_n v_n).

Since f is strictly α-concave and strictly g-monotone, it follows from (15) that

g(u_{n+2}) ≫ f(u_n) ≫ f(t_n v_n) = f( (2t_n/(1+t)) ((1+t)/2) v_n ) ≫ (2t_n/(1+t)) ((1+t)/2)^α f(v_n) ≫ (2t_n/(1+t)) ((1+t)/2)^α g(v_{n+2}).

By definition of t_{n+2}, it follows that

t_{n+2} ≥ (2t_n/(1+t)) ((1+t)/2)^α,

which yields a contradiction when n tends to infinity, since the right-hand side converges to t (2/(1+t))^{1−α} > t. Consequently, we have lim_{n→∞} t_n = 1.□

Lemma 3.7

Let f, g : P̊ → P̊ be two operators. Assume that g is strictly super-homogeneous and f is strictly (−α)-convex and g-antitone. Let {u_n}, {v_n} be sequences satisfying (3) and (4), and set t_n ≔ sup{t ∈ (0, 1] : g(u_n) ≥ t g(v_n)} for all n ≥ 0. Then, lim_{n→∞} t_n = 1.

Proof

Observe first that, from the definition of t_n, we have g(u_n) ≥ t_n g(v_n) for all n ≥ 0. In particular, from (4) it follows that g(u_{n+1}) ≥ g(u_n) ≥ t_n g(v_n) ≥ t_n g(v_{n+1}), so we have t_n ≤ t_{n+1} for all n, which implies 0 < t_0 ≤ t_1 ≤ ⋯ ≤ t_n ≤ ⋯ ≤ 1. Then, there is t ∈ [t_n, 1] for all n ≥ 0 such that lim_{n→∞} t_n = t. We claim that t = 1. Assume by contradiction that t < 1.

Observe that from the super-homogeneity of g, we have

(16) g(t_n^{−1} u_n) ≫ t_n^{−1} g(u_n) ≥ g(v_n).

Since f is strictly (−α)-convex and g-antitone, it follows from (16) that

g(u_{n+2}) ≫ f(v_n) ≫ f(t_n^{−1} u_n) = f( (2t_n/(1+t))^{−1} ((1+t)/2)^{−1} u_n ) ≫ (2t_n/(1+t)) ((1+t)/2)^α f(u_n) ≫ (2t_n/(1+t)) ((1+t)/2)^α g(v_{n+2}).

By definition of t_{n+2}, it follows that

t_{n+2} ≥ (2t_n/(1+t)) ((1+t)/2)^α,

which yields a contradiction when n tends to infinity, since the right-hand side converges to t (2/(1+t))^{1−α} > t. Consequently, we have lim_{n→∞} t_n = 1.□

Next, we show that our sequences converge, and we deduce that the set of coincidence points is nonempty.

Lemma 3.8

Under the hypotheses of Lemma 3.6 (resp. Lemma 3.7), if {u_n}, {v_n} are sequences satisfying (2) (resp. (4)), then the sequences {g(u_n)} and {g(v_n)} converge to g(v), where v ∈ C(f, g).

Proof

For any positive integers n and k, it follows from (2) (resp. (4)) and Lemma 3.6 (resp. Lemma 3.7) that

0_E ≤ g(u_{n+k}) − g(u_n) ≤ g(v_n) − g(u_n) ≤ (1 − t_n) g(v_n) ≤ (1 − t_n) g(v_0),
0_E ≤ g(v_n) − g(v_{n+k}) ≤ g(v_n) − g(u_n) ≤ (1 − t_n) g(v_n) ≤ (1 − t_n) g(v_0).

Thus, we deduce from the normality of P that

‖g(u_{n+k}) − g(u_n)‖ ≤ c_N (1 − t_n) ‖g(v_0)‖ and ‖g(v_{n+k}) − g(v_n)‖ ≤ c_N (1 − t_n) ‖g(v_0)‖,

where c_N > 0 is the constant of normality of P. Hence, by using Lemma 3.6 (resp. Lemma 3.7), we deduce that {g(u_n)} and {g(v_n)} are Cauchy sequences. Therefore, the sequences {g(u_n)} and {g(v_n)} converge to some w_1, w_2 ∈ E, respectively. As the sequences {g(u_n)} and {g(v_n)} are included in the closed order-interval [g(u_0), g(v_0)], we deduce that their limits are in [g(u_0), g(v_0)] ⊆ P̊. Therefore, there exist u, v ∈ P̊ such that w_1 = g(u) and w_2 = g(v). Now, since g(u), g(v) ∈ [g(u_n), g(v_n)] and g(u_n) ≥ t_n g(v_n) for all n ≥ 0, it follows that 0_E ≤ g(v) − g(u) ≤ g(v_n) − g(u_n) ≤ (1 − t_n) g(v_0). So, by the normality of P, we get ‖g(v) − g(u)‖ ≤ c_N (1 − t_n) ‖g(v_0)‖, and then g(v) = g(u). Hence, for all n ≥ 0 we have g(u_n) ≪ g(v) ≪ g(v_n); then, from the fact that f is strictly g-monotone (resp. f is strictly g-antitone), we obtain

g(u_n) ≪ f(u_n) ≪ f(v) ≪ f(v_n) ≪ g(v_n)   (resp. g(u_n) ≪ f(v_n) ≪ f(v) ≪ f(u_n) ≪ g(v_n)).

Consequently, as n tends to infinity, we obtain g(v) ≤ f(v) ≤ g(v). Therefore, g(v) = f(v), which proves that v ∈ C(f, g).□

The following lemma shows that the image by g of the set of coincidence points is reduced to a singleton.

Lemma 3.9

Let f, g : P̊ → P̊ be two operators such that g is surjective and strictly super-homogeneous, and f is strictly α-concave and strictly g-monotone (resp. strictly (−α)-convex and strictly g-antitone). Then, g(C(f, g)) is a singleton.

Proof

From Lemma 3.5, there exist x_0, y_0 ∈ P̊ satisfying (5) (resp. (9)). Then, by Lemma 3.3 (resp. Lemma 3.4), there exist two sequences {u_n} and {v_n} satisfying (1) and (2) (resp. (3) and (4)). Then, using Lemma 3.8, we see that C(f, g) is nonempty. Let x, y ∈ C(f, g) and define t* ≔ sup{t ∈ (0, 1] : g(y) ∈ [t g(x), t^{−1} g(x)]}. Observe first that we have g(y) ∈ [t* g(x), t*^{−1} g(x)] with t* ∈ (0, 1]. Next, we claim that t* = 1. To this purpose, assume that 0 < t* < 1. Since g is strictly super-homogeneous, we get

g(t* x) ≪ t* g(x) ≤ g(y) ≤ t*^{−1} g(x) ≪ g(t*^{−1} x).

So, if we assume that f is strictly α-concave and strictly g-monotone, we have

t*^α g(x) = t*^α f(x) ≪ f(t* x) ≪ f(y) ≪ f(t*^{−1} x) ≪ t*^{−α} f(x) = t*^{−α} g(x).

However, if we assume that f is strictly (−α)-convex and strictly g-antitone, we obtain

t*^α g(x) = t*^α f(x) ≪ f(t*^{−1} x) ≪ f(y) ≪ f(t* x) ≪ t*^{−α} f(x) = t*^{−α} g(x).

Consequently, from the definition of t*, we obtain t*^α ≤ t*, where α ∈ (0, 1), which is a contradiction. Therefore, t* = 1 and so g(x) = g(y), which implies that g(C(f, g)) is a singleton.□

Using some commutativity conditions, we show next the existence of a common fixed point of f and g.

Lemma 3.10

Under the hypotheses of Lemma 3.9, f and g have a unique common fixed point z in P̊ if and only if z = g(C(f, g)) ∈ C(f, g). In particular, if C(f, g) ⊆ C(f∘g, g∘f), then f and g have a unique common fixed point.

Proof

If f and g have a unique common fixed point z in P̊, then f(z) = g(z) = z, which obviously means that the unique point z = g(C(f, g)) ∈ C(f, g). Conversely, if z ≔ g(C(f, g)) ∈ C(f, g), then f(z) = g(z) = g(C(f, g)) = z. If z_1, z_2 are two common fixed points, then z_1 = g(z_1), z_2 = g(z_2) ∈ g(C(f, g)), which is a singleton. In particular, if C(f, g) ⊆ C(f∘g, g∘f), then for z ∈ C(f, g) we have g(z) = f(z). It follows from C(f, g) ⊆ C(f∘g, g∘f) that g^2(z) = g(f(z)) = f(g(z)). Thus, g(z) ∈ C(f, g), that is, g(C(f, g)) ∈ C(f, g), and the first part applies.□

Proof of Theorem 3.1

(resp. Theorem 3.2). At first, observe that C(f, g) is nonempty by Lemma 3.8 and that g(C(f, g)) is a singleton by Lemma 3.9. More precisely, we deduce from Lemmas 3.3 (resp. 3.4) and 3.5 that there exist two sequences {u_n} and {v_n} in P̊ satisfying (1) and (2) (resp. (3) and (4)). Moreover, by Lemmas 3.8 and 3.9, the sequences {g(u_n)} and {g(v_n)} converge to g(C(f, g)). Finally, the uniqueness result for the common fixed point follows from Lemma 3.10.□
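To make the construction behind Theorems 3.1 and 3.2 concrete, here is a scalar illustration (our own sketch, not part of the paper) with E = ℝ and P = [0, ∞): f(x) = 1 + √x is strictly 1/2-concave and monotone, g(x) = x^2 + x is strictly super-homogeneous and surjective on (0, ∞), and the updates below are the limiting case ε_n → 0 of conditions (1) and (2).

```python
import numpy as np

# Scalar sketch of the iteration of Theorem 3.1 (assumed example, not the authors' code).
f = lambda x: 1.0 + np.sqrt(x)                            # strictly 1/2-concave, monotone
g = lambda x: x**2 + x                                    # strictly super-homogeneous, surjective
g_inv = lambda y: (-1.0 + np.sqrt(1.0 + 4.0 * y)) / 2.0   # explicit inverse of g on (0, inf)

u, v = 1e-2, 1e2      # chosen so that g(u0) < f(u0) and f(v0) < g(v0), i.e. condition (5)
for n in range(60):
    u, v = g_inv(f(u)), g_inv(f(v))                       # exact updates: g(u_{n+1}) = f(u_n), etc.

print(u, v, g(u), g(v))   # u, v -> 1 (the coincidence point); g(u_n), g(v_n) -> g(C(f, g)) = 2
```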

4 Coincidence point theorems for inverse monotone operators

Let ℋ be a finite-dimensional real Hilbert space. Assume that E is the Banach space of self-adjoint linear operators on ℋ, and P its positive cone of semi-definite operators (i.e., P ≔ {A ∈ E : ⟨Av, v⟩ ≥ 0 for all v ∈ ℋ}). An operator A ∈ P is said to be positive definite if ⟨Av, v⟩ > 0 for all v ∈ ℋ \ {0}, and the set of all positive definite operators is denoted by P^+. It is well known that P̊ = P^+ (see, for instance, [24]). In general, the g-monotonicity of an operator f may be difficult to verify directly. An important subclass of such operators has been investigated in the literature, namely when g is an inverse monotone operator and f is a monotone or an antitone operator. Therefore, in this section, we pay particular attention to such operators and establish our first results for this subclass. First, we recall a theorem of Furuta [18], and we show some preliminary results needed in the sequel.
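In this finite-dimensional matrix setting, the relations ≤ and ≪ can be tested through the eigenvalues of a difference. The small helper below is our own illustrative sketch (not from the paper); it is the kind of check used implicitly in the numerical experiments of Section 5.

```python
import numpy as np

def leq(X, Y, tol=1e-12):
    """X <= Y in the Loewner order: Y - X is positive semidefinite."""
    return np.min(np.linalg.eigvalsh(Y - X)) >= -tol

def ll(X, Y):
    """X << Y: Y - X is positive definite, i.e. lies in the interior of P."""
    return np.min(np.linalg.eigvalsh(Y - X)) > 0.0

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3))
X = M @ M.T                          # a positive semidefinite matrix
assert leq(X, X + np.eye(3)) and ll(X, X + np.eye(3))
```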

Theorem 4.1

[18, Theorem 3.1] Let X and Y be strictly positive operators on ℋ. If X ≪ Y, then f(X) ≪ f(Y) for any nonconstant operator monotone function f on (0, ∞).

In order to extend Furuta’s theorem we need the following lemmas.

Lemma 4.2

If X ≪ Y, then A^* X A ≪ A^* Y A for every invertible operator A.

Proof

The result follows from the fact that, for every nonzero vector u, if A is invertible, then

Y ≫ X ⟹ ⟨u, Yu⟩ > ⟨u, Xu⟩ ⟹ ⟨Au, YAu⟩ > ⟨Au, XAu⟩.

Therefore,

⟨u, A^* Y A u⟩ = ⟨Au, YAu⟩ > ⟨Au, XAu⟩ = ⟨u, A^* X A u⟩.□

Lemma 4.3

Let X and Y be strictly positive operators on ℋ. If X ≪ Y, then Y^{−1} ≪ X^{−1}.

Proof

At first, note that if X is a strictly positive operator, then ⟨Xv, v⟩ > 0 for all v ≠ 0, which means that X is invertible. We show next that T ↦ T^{−1} is strictly antitone on commuting positive operators. Let 0 ≪ X ≪ Y be such that X and Y commute; then X^{1/2}, Y^{1/2}, X and Y also commute. For all nonzero vectors v, we have

⟨(Y − X)v, v⟩ > 0.

Then, for every nonzero vector v and w = Y^{−1/2} X^{−1/2} v, we have

⟨(X^{−1} − Y^{−1})v, v⟩ = ⟨(X^{−1} − Y^{−1}) X^{1/2} Y^{1/2} w, X^{1/2} Y^{1/2} w⟩ = ⟨Y^{−1}(Y − X)X^{−1} X^{1/2} Y^{1/2} w, X^{1/2} Y^{1/2} w⟩ = ⟨(Y − X)w, w⟩ > 0.

Now, for not necessarily commuting operators X and Y which satisfy 0 ≪ X ≪ Y, we have, by Lemma 4.2, Y^{−1/2} X Y^{−1/2} ≪ I. As Y^{−1/2} X Y^{−1/2} and I commute, then I ≪ Y^{1/2} X^{−1} Y^{1/2}. Using again Lemma 4.2, we obtain Y^{−1} ≪ X^{−1}.□
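A quick randomized spot check of Lemma 4.3 (illustrative only, of course not a substitute for the proof):

```python
import numpy as np

# Spot check of Lemma 4.3: if 0 << X << Y, then Y^{-1} << X^{-1}.
rng = np.random.default_rng(2)
for _ in range(100):
    A = rng.standard_normal((4, 4)); X = A @ A.T + np.eye(4)              # X positive definite
    B = rng.standard_normal((4, 4)); Y = X + B @ B.T + 1e-3 * np.eye(4)   # X << Y
    D = np.linalg.inv(X) - np.linalg.inv(Y)
    D = (D + D.T) / 2                                # remove numerical asymmetry
    assert np.min(np.linalg.eigvalsh(D)) > 0         # X^{-1} - Y^{-1} is positive definite
```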

Corollary 4.4

Let X and Y be strictly positive operators on ℋ. If X ≪ Y, then g(Y) ≪ g(X) for any nonconstant operator antitone function g on (0, ∞).

Proof

The function f = 1/g is operator monotone on (0, ∞). Then, the result is a direct consequence of Theorem 4.1 and Lemma 4.3.□

Theorem 4.5

Let F, G : P̊ → P̊ be nonconstant operators. Assume that:

  1. F is a strictly α-concave (resp. strictly (−α)-convex) and strictly monotone (resp. strictly antitone) operator.

  2. G is a surjective, strictly super-homogeneous and strictly inverse monotone operator.

Then, the equation

(17) G(X) = F(X), X ∈ P̊,

has a solution.

Proof

Since, from (1) and (2) and Lemma 2.7, F is a strictly G-monotone (resp. strictly G-antitone) operator, all hypotheses of Theorem 3.1 (resp. Theorem 3.2) are fulfilled.□

Lemma 4.6

Let X, Y be two commuting strictly positive operators such that X ≪ Y, and let g : (0, ∞) → (0, ∞) be a strictly monotone function. Then, g(X) ≪ g(Y).

Proof

The commutativity hypothesis implies that there exists a unitary matrix Q such that X = Q D_X Q^* and Y = Q D_Y Q^*, where D_X = diag(λ_1, …, λ_n) and D_Y = diag(μ_1, …, μ_n) are diagonal matrices. If X ≪ Y, then, using Lemma 4.2, we deduce that λ_i < μ_i for all i, and, as g is strictly monotone, it follows that g(λ_i) < g(μ_i) for all i. Therefore, g(D_X) ≪ g(D_Y), which implies that g(X) ≪ g(Y).□

Lemma 4.7

Let X, Y be two commuting operators and f a real function. Then, the operators f ( X ) and f ( Y ) commute.

Proof

The proof is straightforward, hence omitted.□

Proposition 4.8

If g, g̃ : (0, ∞) → (0, ∞) are two surjective, strictly super-homogeneous, inverse monotone operator functions, then so is g ∘ g̃.

Proof

Since g and g̃ are surjective and inverse monotone, we deduce that g ∘ g̃ is a surjective and inverse monotone operator. It remains to prove that g ∘ g̃ is strictly super-homogeneous. By hypothesis, g̃ is strictly super-homogeneous and, by Lemma 4.7, g̃(tX) and g̃(X) commute. Then, using Lemma 4.6, we deduce that (g ∘ g̃)(tX) = g(g̃(tX)) ≪ g(t g̃(X)) ≪ t (g ∘ g̃)(X), for all t ∈ (0, 1).□

Proposition 4.9

Let F, F̃ : P̊ → P̊ be two strictly monotone operators. Assume that F is strictly α-concave (resp. (−α)-convex) and F̃ is strictly β-concave (resp. (−β)-convex); then, for all a, b > 0, we have:

  1. F ∘ F̃ and aF + bF̃ are strictly monotone operators.

  2. aF + bF̃ is strictly max{α, β}-concave (resp. (−max{α, β})-convex).

  3. F ∘ F̃ is strictly αβ-concave (resp. (−αβ)-convex).

From Propositions 4.8 and 4.9, as a consequence of Theorem 4.5, we deduce immediately the following result.

Theorem 4.10

Let {g_i}_{i=1}^{n} be a family of nonconstant, surjective, strictly super-homogeneous, inverse monotone operator functions on (0, ∞). Let {F_j}_{j=1}^{m} be a family of strictly α-concave (resp. strictly (−α)-convex) monotone (resp. antitone) operators on P̊, and let J be a family of nonempty subsets of [1, m]. Then, the nonlinear equation

g_1 ∘ ⋯ ∘ g_n(X) = Q + ∑_{I ∈ J} ∑_{σ ∈ S_I} A_{I,σ}^* F_{I,σ}(X) A_{I,σ},

has a positive solution, where Q is a positive semi-definite matrix, S_I is the permutation group of the set I = {i_1, …, i_k}, the A_{I,σ} are invertible matrices and F_{I,σ} = F_{σ(i_1)} ∘ ⋯ ∘ F_{σ(i_k)}.

5 Applications

In this section, we show at first that some existing monotone operators in the literature are already strictly super-homogeneous and surjective. Next, we derive a corollary from a previous result and solve nonlinear equations involving Uchiyama’s operator functions [16,17]. We construct two algorithms for monotone and antitone operators, and we apply them to solve several nonlinear matrix equations. We also investigate the unidimensional case and we show that our approach may be used to compute polynomial roots satisfying the Descartes condition. We present some numerical experiments confirming the efficiency of the constructed algorithm.

Theorem 5.1

(Löwner-Heinz inequality). The operator function t ↦ t^α is operator monotone for all α ∈ [0, 1].

Theorem 5.2

(Uchiyama [16,17]) Let g_1(x) and g_2(x) be the functions defined on [0, ∞) by

(18) g_1(x) = ∏_{i=1}^{k} (x + a_i)^{γ_i} and g_2(x) = x^α exp(x),

where γ_i > 0, γ_1 ≥ 1, 0 = a_1 < a_2 < ⋯ < a_k and α ≥ 1. Then, the inverses of the operator functions g_1 and g_2 are operator monotone functions on [0, ∞).

Lemma 5.3

If g : (0, ∞) → (0, ∞) is a surjective function, then the operator g : P^+ → P^+ is surjective. In particular, g_1 and g_2 defined in (18) are surjective.

Proof

Let g : (0, ∞) → (0, ∞) be a surjective function. It is clear that g : P^+ → P^+ is well defined. Let A ∈ P^+ and let Q be a unitary matrix such that A = Q D Q^* with D = diag(λ_1, …, λ_n). By the surjectivity of the function g, there exist μ_1, …, μ_n ∈ (0, ∞) such that λ_i = g(μ_i). For B = Q D′ Q^*, where D′ = diag(μ_1, …, μ_n), we have g(B) = Q g(D′) Q^* = Q D Q^* = A.□
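The surjectivity argument of Lemma 5.3 is constructive: to solve g(B) = A, diagonalize A and apply the scalar inverse of g to its eigenvalues. Below is a minimal numpy sketch under that reading; the 1-D root finder used for the scalar inverse is our assumption, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import brentq

def op_fun(h, A):
    """Apply a scalar function h spectrally to a symmetric matrix A."""
    lam, Q = np.linalg.eigh(A)
    return (Q * h(lam)) @ Q.T

g1 = lambda x: x * (x + 1.0)                 # Uchiyama-type function, cf. (18) and Example 5.10
def g1_inv(y):                               # scalar inverse of g1 on (0, inf)
    return np.array([brentq(lambda s: g1(s) - yi, 0.0, 1.0 + yi) for yi in np.atleast_1d(y)])

A = np.array([[4., 2., 1.], [2., 6., 3.], [1., 3., 7.]])   # some positive definite matrix
B = op_fun(g1_inv, A)                                      # the preimage constructed in Lemma 5.3
print(np.linalg.norm(op_fun(g1, B) - A))                   # ~1e-13: indeed g1(B) = A
```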

Lemma 5.4

Let X_1, X_2, X_3 and X_4 be four commuting strictly positive operators such that X_1 ≪ X_2 and X_3 ≪ X_4. Then, X_1^r X_3^s ≪ X_2^r X_4^s for all r, s > 0.

Proof

By the commutativity hypothesis, there exists a unitary matrix Q such that Q^* X_i Q = D_i, where D_i is a diagonal matrix, for i = 1, …, 4. Since X_1 ≪ X_2 and X_3 ≪ X_4, by Lemma 4.2 we obtain D_1 ≪ D_2 and D_3 ≪ D_4, which implies that D_1^r D_3^s ≪ D_2^r D_4^s. Using again Lemma 4.2, we infer the result.□

Lemma 5.5

The operator functions g 1 and g 2 defined in (18) are strictly super-homogeneous.

Proof

Note that, for all t ∈ (0, 1) and a ≥ 0, we have tX + a ≪ X + a and t^γ X ≤ tX whenever γ ≥ 1. Now, as the operators X + a_i and tX + a_i commute, using Lemma 5.4 we get

g_1(tX) ≪ t X^{γ_1} ∏_{i=2}^{k} (X + a_i)^{γ_i} = t g_1(X).

Next, observe that exp(tX) ≪ exp(X) and t^γ X^γ ≤ t X^γ for all t ∈ (0, 1) and γ ≥ 1. We apply Lemma 5.4 again and obtain the super-homogeneity of g_2.□

Next, we give an example of a strictly super-homogeneous, inverse monotone operator of Riccati type.

Proposition 5.6

Let A ∈ P̊. The operator g_3 : P̊ → P̊, X ↦ XAX, is strictly super-homogeneous and bijective, and its inverse is the monotone operator X ↦ A^{−1/2}(A^{1/2} X A^{1/2})^{1/2} A^{−1/2}.

Proof

The super-homogeneity of the operator X ↦ XAX and the monotonicity of the operator X ↦ A^{−1/2}(A^{1/2} X A^{1/2})^{1/2} A^{−1/2} are straightforward. For the remainder of the proof, see [25, Exercise 1.2.13].□
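A numerical check of the inversion formula in Proposition 5.6 (a sketch of ours with randomly generated positive definite matrices; scipy.linalg.sqrtm computes the principal matrix square root):

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(3)
M = rng.standard_normal((3, 3)); A = M @ M.T + np.eye(3)   # A positive definite
N = rng.standard_normal((3, 3)); X = N @ N.T + np.eye(3)   # X positive definite

Y = X @ A @ X                                   # Y = g3(X) = X A X
Ah = sqrtm(A); Ahi = np.linalg.inv(Ah)          # A^{1/2} and A^{-1/2}
X_rec = (Ahi @ sqrtm(Ah @ Y @ Ah) @ Ahi).real   # A^{-1/2} (A^{1/2} Y A^{1/2})^{1/2} A^{-1/2}
print(np.linalg.norm(X_rec - X))                # ~1e-13: the formula inverts g3
```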

Corollary 5.7

Let g_1 and g_2 be the functions defined in Theorem 5.2. Let f_k, f̂_k : P̊ → P̊ be the following operators:

f_1(X) = ∑_{i=1}^{r_1} (I + A_i^* X A_i)^{a_i},  f_2(X) = ∑_{j=1}^{r_2} (I + B_j^* X^{−1} B_j)^{−b_j},  f_3(X) = ∑_{p=1}^{r_3} ∑_{q=1}^{r_4} C_{p,q}^* X^{c_{p,q}} C_{p,q},

f̂_1(X) = ∑_{i=1}^{r_1} (I + A_i^* X A_i)^{−a_i},  f̂_2(X) = ∑_{j=1}^{r_2} (I + B_j^* X^{−1} B_j)^{b_j},  f̂_3(X) = ∑_{p=1}^{r_3} ∑_{q=1}^{r_4} C_{p,q}^* X^{−c_{p,q}} C_{p,q},

where r_1, r_2, r_3, r_4 are positive integers, I is the identity matrix, and A_i, B_j and C_{p,q} are arbitrary invertible matrices. Let Q ∈ P and let M_k be arbitrary square matrices for k = 1, 2, 3. If a_i, b_j, c_{p,q} ∈ (0, 1) for all i ∈ [1, r_1], j ∈ [1, r_2], p ∈ [1, r_3] and q ∈ [1, r_4], then each of the following nonlinear matrix equations

(19) g_1^{n_1} ∘ g_2^{m_1} ∘ ⋯ ∘ g_1^{n_k} ∘ g_2^{m_k}(X) = Q + M_1^* f_1(X) M_1 + M_2^* f_2(X) M_2 + M_3^* f_3(X) M_3,

(20) g_1^{n_1} ∘ g_2^{m_1} ∘ ⋯ ∘ g_1^{n_k} ∘ g_2^{m_k}(X) = Q + M_1^* f̂_1(X) M_1 + M_2^* f̂_2(X) M_2 + M_3^* f̂_3(X) M_3,

has a positive solution, where k, n_i and m_i are arbitrary integers for all i ∈ [1, k].

Proof

Observe first that g_1 and g_2 are nonconstant, surjective, strictly super-homogeneous and inverse monotone operator functions on (0, ∞). Next, using the Löwner-Heinz theorem and Lemmas 4.3 and 4.2, one shows easily that the f_k (resp. f̂_k) are strictly α_k-concave (resp. (−α_k)-convex), where α_1 = max_i{a_i}, α_2 = max_j{b_j} and α_3 = max_{p,q}{c_{p,q}}. Consequently, the result follows from Theorem 4.10.□

Example 5.8

Equations (19) and (20) include the following nonlinear matrix equations as particular cases:

  1. X = Q + ∑_{i=1}^{n} A_i^* X^{q_i} A_i, 0 < q_i < 1 (see Duan et al. [10], Li et al. [15] and Ameer et al. [26]).

  2. X − A^* X^{−q} A = Q, q ∈ (0, 1) (see Hasanov [19]).

  3. X^s − A^* X^{−q} A = Q, s ≥ q (see Meng and Kim [20]).

Next, we give two algorithms for solving nonlinear equations.

Algorithm 1. Case of the matrix Eq. (19).
1: Input: the operators f and g, the tolerance tol and the number of iterations iMax.
2: Find t_0 and u_0 satisfying (5)
3: Solve g(u) = (1 − t_0) f(u_0) + t_0 g(u_0) and compute the error = ‖f(u) − g(u)‖
4: Set u_1 = u
5: while error > tol do                     ▷ Newton-Raphson method
6:  Solve g(u) = (1 − t_0) f(u_1) + t_0 f(u_0) and compute the error = ‖f(u) − g(u)‖
7:  Exit if iMax is reached or condition (2) is not satisfied
8:  Set f(u_0) = f(u_1) and f(u_1) = f(u)
9: end while
10: Output: u, the positive definite solution of Eq. (19).
Algorithm 2. Case of the matrix Eq. (20).
1: Input: the operators f and g, the tolerance tol and the number of iterations iMax.
2: Find t_0, u_0 and v_0 satisfying (9)
3: Solve g(u) = (1 − t_0) f(v_0) + t_0 g(u_0) and g(v) = (1 − t_0) f(u_0) + t_0 g(v_0)
4: Compute the error = max{‖f(u) − g(u)‖, ‖f(v) − g(v)‖}
5: Set u_1 = u and v_1 = v
6: while error > tol do                     ▷ Newton-Raphson method
7:  Solve g(u) = (1 − t_0) f(v_1) + t_0 f(v_0) and g(v) = (1 − t_0) f(u_1) + t_0 f(u_0)
8:  Compute the error = max{‖f(u) − g(u)‖, ‖f(v) − g(v)‖}
9:  Exit if iMax is reached or condition (4) is not satisfied
10: Set f(u_0) = f(u_1), f(v_0) = f(v_1), f(u_1) = f(u) and f(v_1) = f(v)
11: end while
12: Output: u, the positive definite solution of Eq. (20).
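A compact Python transcription of Algorithm 1 follows; it is our own sketch of the pseudocode above. The inner routine solve_g, which returns u with g(u) = T, is left abstract: it may be the Newton-Raphson solver mentioned in Remark 5.9 or, when g is an operator function, a spectral inverse as in Lemma 5.3. The monotonicity safeguard of step 7 is omitted for brevity.

```python
import numpy as np

def algorithm1(f, g, solve_g, u0, t0, tol=1e-12, i_max=200):
    """Sketch of Algorithm 1 (monotone case of Eq. (19)).

    f, g    : the two operators of the equation g(X) = f(X);
    solve_g : inner solver returning u such that g(u) = T (assumed to be given);
    u0, t0  : starting point and constant satisfying condition (5).
    """
    u_prev = u0
    u = solve_g((1.0 - t0) * f(u_prev) + t0 * g(u0))                # step 3
    err = np.linalg.norm(f(u) - g(u))
    it = 0
    while err > tol and it < i_max:                                 # steps 5-9
        u_prev, u = u, solve_g((1.0 - t0) * f(u) + t0 * f(u_prev))  # step 6
        err = np.linalg.norm(f(u) - g(u))
        it += 1
    return u, err, it
```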

Remark 5.9

Note that the equations g(u) = (1 − t_0) f(u_0) + t_0 g(u_0) and g(u) = (1 − t_0) f(u_1) + t_0 f(u_0) of Algorithm 1, and the equations g(u) = (1 − t_0) f(v_0) + t_0 g(u_0), g(v) = (1 − t_0) f(u_0) + t_0 g(v_0), g(u) = (1 − t_0) f(v_1) + t_0 f(v_0) and g(v) = (1 − t_0) f(u_1) + t_0 f(u_0) of Algorithm 2, are solved by the Newton-Raphson method with an initial guess close to zero.

The following numerical experiments are carried out for the initial guesses X_0 = Y_0 = 10^{−3} I and the constant matrices (written row by row)

(21) U = [2 2 1; 3 2 3; 1 2 2],  V = [2 4 8; 4 4 4; 7 3 1]  and  Q = [4 2 1; 2 6 3; 1 3 7].

Example 5.10

Consider the following nonlinear matrix equation

(22) X^2 + X = Q + U^* X^{1/2} U + V^* X^{1/3} V,

resp.

(23) X^2 + X = Q + U^* X^{−1/2} U + V^* X^{−1/3} V.

By Corollary 5.7, Eq. (22), resp. (23), has a positive definite solution, since (22), resp. (23), follows from Eq. (19), resp. (20), by taking k = 1, n_1 = 1, m_1 = 0, g_1(x) = x(x + 1), M_1 = M_2 = 0, M_3 = I, r_3 = 1, r_4 = 2, c_{1,1} = 1/2, c_{1,2} = 1/3, C_{1,1} = U and C_{1,2} = V. Using Algorithm 1, resp. Algorithm 2, we obtain the following approximate solution of Eq. (22), resp. Eq. (23), after 22 iterations:

X_22 = [13.992350147612 0.451964873128 0.122298155191; 0.451964873128 10.875503975397 1.884559066264; 0.122298155191 1.884559066264 14.597646330214],

resp.

X_22 = Y_22 = [5.87748108914077 0.10066316313069 0.02098827705064; 0.10066316313069 4.91430570936858 0.82493481690477; 0.02098827705064 0.82493481690477 6.69047330454621],

with the error estimation ‖X_22^2 + X_22 − Q − U^* X_22^{1/2} U − V^* X_22^{1/3} V‖_2 = 2.0 × 10^{−13}, resp. ‖X_22^2 + X_22 − Q − U^* X_22^{−1/2} U − V^* X_22^{−1/3} V‖_2 = 4.4 × 10^{−14}.
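For reproducibility, here is a self-contained sketch of the computation for Eq. (22), under our own choices: the relaxation constant t_0 = 0.1 is picked ad hoc rather than derived from condition (5), and the inner solves g(u) = T are performed spectrally (g being the operator function of x^2 + x) instead of by Newton-Raphson; the iteration still converges to the positive definite solution of Eq. (22).

```python
import numpy as np

# Sketch for Eq. (22): X^2 + X = Q + U^T X^{1/2} U + V^T X^{1/3} V  (t0 chosen ad hoc).
U = np.array([[2., 2., 1.], [3., 2., 3.], [1., 2., 2.]])
V = np.array([[2., 4., 8.], [4., 4., 4.], [7., 3., 1.]])
Q = np.array([[4., 2., 1.], [2., 6., 3.], [1., 3., 7.]])

def op_fun(h, X):                                  # scalar function applied spectrally
    lam, P = np.linalg.eigh(X)
    return (P * h(lam)) @ P.T

f = lambda X: Q + U.T @ op_fun(np.sqrt, X) @ U + V.T @ op_fun(np.cbrt, X) @ V
g = lambda X: X @ X + X
g_solve = lambda T: op_fun(lambda y: (-1.0 + np.sqrt(1.0 + 4.0 * y)) / 2.0, T)  # g(u) = T

t0, X_prev = 0.1, 1e-3 * np.eye(3)                 # X_0 = 10^{-3} I as in the experiments
X = g_solve((1.0 - t0) * f(X_prev) + t0 * g(X_prev))            # step 3 of Algorithm 1
for _ in range(200):                                            # steps 5-9
    if np.linalg.norm(f(X) - g(X)) < 1e-11:
        break
    X_prev, X = X, g_solve((1.0 - t0) * f(X) + t0 * f(X_prev))

print(np.round(X, 6), np.linalg.norm(f(X) - g(X)))
```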

Example 5.11

Consider the nonlinear matrix equation

(24) ∏_{j=0}^{6} (X + jI) = Q + U^*(I + X)^{1/2} U + V^*(I + X^{−1})^{−1/3} V,

resp.

(25) ∏_{j=0}^{6} (X + jI) = Q + U^*(I + X)^{−1/2} U + V^*(I + X^{−1})^{1/3} V.

Eq. (24), resp. Eq. (25), follows from Eq. (19), resp. Eq. (20), by taking k = 1, n_1 = 1, m_1 = 0, g_1(x) = ∏_{j=0}^{6} (x + j), M_1 = M_2 = I, M_3 = 0, r_1 = r_2 = 1, a_1 = 1/2, b_1 = 1/3, A_1 = U and B_1 = V. Using Algorithm 1, resp. Algorithm 2, we obtain the following approximate solution of Eq. (24), resp. Eq. (25), after 19 iterations, resp. 22 iterations:

X_19 = [0.05492899089403 0.00637836705832 0.00438239263599; 0.00637836705832 0.03981342625499 0.01419254290757; 0.00438239263599 0.01419254290757 0.06103245551386],

resp.

X_22 = Y_22 = [0.14554684990904 0.00218735236784 0.00398188778218; 0.00218735236784 0.10940957866430 0.02158828596070; 0.00398188778218 0.02158828596070 0.17148995243330],

with the error estimation ‖∏_{j=0}^{6}(X_19 + jI) − Q − U^*(I + X_19)^{1/2} U − V^*(I + X_19^{−1})^{−1/3} V‖_2 = 5.8 × 10^{−12}, resp. ‖∏_{j=0}^{6}(X_22 + jI) − Q − U^*(I + X_22)^{−1/2} U − V^*(I + X_22^{−1})^{1/3} V‖_2 = 1.2 × 10^{−13}.

Example 5.12

Consider the nonlinear matrix equation

(26) X^{3/2} exp(X) = Q + U^* X^{1/2} U + U^*(I + X)^{1/3} U,

resp.

(27) X^{3/2} exp(X) = Q + U^* X^{−1/2} U + U^*(I + X)^{−1/3} U.

Eq. (26), resp. Eq. (27), follows from Eq. (19), resp. Eq. (20), by taking k = 1, n_1 = 0, m_1 = 1, g_2(x) = x^{3/2} exp(x), M_1 = M_3 = I, M_2 = 0, r_1 = r_3 = r_4 = 1, a_1 = 1/3, c_{1,1} = 1/2 and A_1 = C_{1,1} = U. Using Algorithm 1, resp. Algorithm 2, we obtain the following approximate solution of Eq. (26), resp. Eq. (27), after 15 iterations:

X_15 = [2.41259163868507 0.18595925781130 0.19682687712695; 0.18595925781130 2.39839598599567 0.19404230898730; 0.19682687712695 0.19404230898730 2.45169488926669],

resp.

X_15 = Y_15 = [2.03394095971354 0.16618750281669 0.17364353452140; 0.16618750281669 1.96913461734563 0.18897565538307; 0.17364353452140 0.18897565538307 2.10078599153675],

with the error estimation ‖X_15^{3/2} exp(X_15) − Q − U^* X_15^{1/2} U − U^*(I + X_15)^{1/3} U‖_2 = 7.2 × 10^{−13}, resp. ‖X_15^{3/2} exp(X_15) − Q − U^* X_15^{−1/2} U − U^*(I + X_15)^{−1/3} U‖_2 = 7.2 × 10^{−14}.

Example 5.13

Consider the nonlinear matrix equation

(28) (XAX)^3 + 3(XAX)^2 + 2(XAX) = I + B^*(I + X)^{1/2} B + C^*(I + X^{−1})^{−1/3} C + D^* X^{1/4} D,

resp.

(29) (XAX)^3 + 3(XAX)^2 + 2(XAX) = I + B^*(I + X)^{−1/2} B + C^*(I + X^{−1})^{1/3} C + D^* X^{−1/4} D,

where

A = [3 2 1; 2 5 2; 1 2 7],  B = [2 1 1; 1 2 0; 0 0 2],  C = [1 0 0; 5 3 0; 3 10 10]  and  D = [3 1 1; 3 3 0; 0 1 1].

Eq. (28), resp. (29), is equivalent to the operator equation g(X) = f(X), with g = g_1 ∘ g_3, where g_1(x) = x(x + 1)(x + 2) is an Uchiyama operator function and g_3 is the operator given in Proposition 5.6. The operator f is given by

f(X) = I + B^*(I + X)^{1/2} B + C^*(I + X^{−1})^{−1/3} C + D^* X^{1/4} D,

resp.

f(X) = I + B^*(I + X)^{−1/2} B + C^*(I + X^{−1})^{1/3} C + D^* X^{−1/4} D.

It is not difficult to show that g is surjective, strictly super-homogeneous and inverse monotone, and that f is strictly 1/4-concave, resp. (−1/4)-convex, and monotone, resp. antitone. Hence, the existence of the positive solution follows from Theorem 4.5. Using Algorithm 1, resp. Algorithm 2, we obtain the following approximate solution of Eq. (28), resp. Eq. (29), after 12 iterations:

X_12 = [1.09659903724799 0.33165797383082 0.02892330222728; 0.33165797383082 0.84635677329539 0.09427312566765; 0.02892330222728 0.09427312566765 0.59229346812813],

resp.

X_12 = [1.14822024828508 0.35572457927850 0.05666726902198; 0.35572457927850 0.89667499030598 0.15669691360091; 0.05666726902198 0.15669691360091 0.61401608713953],

with the error estimation ‖g(X_12) − f(X_12)‖_2 = 1.4 × 10^{−12}, resp. ‖g(X_12) − f(X_12)‖_2 = 8.6 × 10^{−13}.

Finally, we show that Algorithm 1 may be applied to compute a positive root of polynomials of the form

(30) P(x) = a_n x^n + ⋯ + a_{m+1} x^{m+1} − a_m x^m − ⋯ − a_1 x − a_0,

where a_i ≥ 0 for i = 0, …, n. It is well known that, according to Descartes' rule of signs, Eq. (30) has only one positive root, provided at least two coefficients a_i and a_j are nonzero, where 0 ≤ j ≤ m < i ≤ n and m, n are two given integers. Another proof of this result can be obtained by taking P = P_g − P_f, where

P_g(x) = a_n x^n + ⋯ + a_{m+1} x^{m+1}  and  P_f(x) = a_m x^m + ⋯ + a_1 x + a_0.

Proposition 5.14

Polynomial (30) has a positive root.

Proof

Since we look for the positive solutions of P(x) = 0, we set y = x^{m+1} for x ∈ (0, +∞) and substitute y^{1/(m+1)} in place of x in Eq. (30). So, we can define f, g : (0, +∞) → (0, +∞) as follows:

f(y) = P_f(y^{1/(m+1)}) and g(y) = P_g(y^{1/(m+1)}).

The function f is obviously strictly m/(m+1)-concave. On the other hand, the function g is strictly increasing and bijective on (0, +∞), which implies that the inverse of g is strictly increasing. Finally, it is not difficult to see that g is strictly super-homogeneous. Consequently, the result follows from Theorem 4.5.□

Example 5.15

We apply Algorithm 1 to compute the positive root of the polynomial

P(x) = 4x^5 + 3x^4 + 7x^3 − 2x^2 − 2x − 6,

and we obtain the following results with respect to the iteration number n:

n    x_n                    P(x_n)
1    0.8767017517334639     −7.298 × 10^{−1}
5    0.8995646487790856     −1.217 × 10^{−3}
9    0.8996012127218075     −2.000 × 10^{−6}
13   0.8996012728107530     −3.288 × 10^{−9}
17   0.8996012729095380     −5.405 × 10^{−12}
21   0.8996012729097004     −7.105 × 10^{−15}
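The computation behind this table can be reproduced, up to implementation details, by the following sketch; it iterates the simplified Picard-type update x_{k+1} = P_g^{-1}(P_f(x_k)) (the relaxation step of Algorithm 1 is omitted here), with the scalar inverse of P_g obtained by a bracketing root finder.

```python
import numpy as np
from scipy.optimize import brentq

# Positive root of P(x) = 4x^5 + 3x^4 + 7x^3 - 2x^2 - 2x - 6 via the splitting P = P_g - P_f.
P  = lambda x: 4*x**5 + 3*x**4 + 7*x**3 - 2*x**2 - 2*x - 6
Pg = lambda x: 4*x**5 + 3*x**4 + 7*x**3     # increasing part, super-homogeneous in y = x^{m+1}
Pf = lambda x: 2*x**2 + 2*x + 6             # part that is m/(m+1)-concave in y (here m = 2)

x = 0.5                                     # any positive starting point
for k in range(40):
    x = brentq(lambda s: Pg(s) - Pf(x), 1e-12, 10.0)   # solve P_g(x_{k+1}) = P_f(x_k)

print(x, P(x))   # x ~ 0.8996012729097, P(x) ~ 0, in agreement with the table above
```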

We conclude this work with the following questions that arise naturally:

Question 1: It would be interesting to determine under which hypotheses the monotonicity results of Section 4 remain true in infinite-dimensional Hilbert spaces.

Question 2: A huge amount of literature uses the coincidence Picard iteration g(x_{n+1}) = f(x_n) to study the coincidence problem of f and g. The convergence of such sequences should be less sensitive to computational approximations, unlike our method, which takes the approximation constraints into consideration. Is it possible to apply or to extend our method to these coincidence problems?

Acknowledgement

This project was supported by the Deanship of Scientific Research at Prince Sattam Bin Abdulaziz University under the research project 2019/01/10332.

References

[1] Gerald Jungck, Commuting mappings and fixed points, Amer. Math. Monthly 83 (1976), no. 4, 261–263, 10.1080/00029890.1976.11994093.

[2] Binayak S. Choudhury and Amaresh Kundu, A coupled coincidence point result in partially ordered metric spaces for compatible mappings, Nonlinear Anal. 73 (2010), no. 8, 2524–2531, 10.1016/j.na.2010.06.025.

[3] Ismat Beg and Hemant Kumar Pathak, Coincidence point with application to stability of iterative procedure in cone metric spaces, Appl. Appl. Math. 13 (2018), no. 2, 1018–1038.

[4] Zead Mustafa, M. M. M. Jaradat, Hassen Aydi, and Ahmad Alrhayyel, Some common fixed points of six mappings on Gb metric spaces using (E.A) property, Eur. J. Pure Appl. Math. 11 (2018), no. 1, 90–109, 10.29020/nybg.ejpam.v11i1.3133.

[5] Faik Gürsoy, Abdul Rahim Khan, Müzeyyen Ertürk, and Vatan Karakaya, Coincidences of nonself operators by a simpler algorithm, Numer. Funct. Anal. Optim. 41 (2020), no. 2, 192–208, 10.1080/01630563.2019.1620770.

[6] Mian Bahadur Zada and Muhammad Sarwar, Common fixed point theorems for rational Fℛ-contractive pairs of mappings with applications, J. Inequal. Appl. 2019 (2019), 11, 10.1186/s13660-018-1952-z.

[7] Maher Berzig and Marwa Bouali, Coincidence point theorems in ordered Banach spaces and applications, Banach J. Math. Anal. 14 (2020), 539–558, 10.1007/s43037-019-00007-3.

[8] Maher Berzig and Marwa Bouali, A coincidence point theorem and its applications to fractional differential equations, J. Fixed Point Theory Appl. 22 (2020), 56, 10.1007/s11784-020-00794-5.

[9] Dajun Guo, The number of nontrivial solutions to Hammerstein nonlinear integral equations, Chin. Ann. Math. Ser. B 7 (1986), no. 2, 191–204.

[10] Xuefeng Duan, Anping Liao, and Bin Tang, On the nonlinear matrix equation X − ∑_{i=1}^{m} A_i^* X^{δ_i} A_i = Q, Linear Algebra Appl. 429 (2008), no. 1, 110–121, 10.1016/j.laa.2008.02.014.

[11] Maher Berzig and Bessem Samet, Positive fixed points for a class of nonlinear operators and applications, Positivity 17 (2013), no. 2, 235–255, 10.1007/s11117-012-0162-z.

[12] Fengxia Zheng, Fixed point theorems for generalized concave operators and applications to fractional differential equation boundary value problems, SCIREA J. Math. 2 (2017), no. 3, 41–54.

[13] Hui Wang, Lingling Zhang, and Xiaoqiang Wang, Fixed point theorems for a class of nonlinear sum-type operators and application in a fractional differential equation, Bound. Value Probl. 2018 (2018), 140, 10.1186/s13661-018-1059-y.

[14] Shunyong Li and Chengbo Zhai, Positive solutions for a new class of Hadamard fractional differential equations on infinite intervals, J. Inequal. Appl. 2019 (2019), 150, 10.1186/s13660-019-2102-y.

[15] Jing Li and Yuhai Zhang, The investigation on two kinds of nonlinear matrix equations, Bull. Malays. Math. Sci. Soc. 42 (2019), 3323–3341, 10.1007/s40840-018-0674-1.

[16] Mitsuru Uchiyama and Morisuke Hasumi, On some operator monotone functions, Integral Equations Operator Theory 42 (2002), no. 2, 243–251, 10.1007/BF01275518.

[17] Mitsuru Uchiyama, Inverse functions of polynomials and orthogonal polynomials as operator monotone functions, Trans. Amer. Math. Soc. 355 (2003), no. 10, 4111–4123, 10.1090/S0002-9947-03-03355-5.

[18] Takayuki Furuta, Operator monotone functions, A > B ≥ 0 and log A > log B, J. Math. Inequal. 7 (2013), no. 1, 93–96, 10.7153/jmi-07-08.

[19] Vejdi I. Hasanov, Positive definite solutions of the matrix equations X ± A^* X^{−q} A = Q, Linear Algebra Appl. 404 (2005), 166–182, 10.1016/j.laa.2005.02.024.

[20] Jie Meng and Hyun-Min Kim, The positive definite solution of the nonlinear matrix equation X^s − A^* X^{−t} A = Q, Numer. Funct. Anal. Optim. 39 (2018), no. 4, 398–412, 10.1080/01630563.2017.1369108.

[21] Janak Raj Sharma and Deepak Kumar, A fast and efficient composite Newton-Chebyshev method for systems of nonlinear equations, J. Complexity 49 (2018), 56–73, 10.1016/j.jco.2018.07.005.

[22] Clemente Cesarano, Multi-dimensional Chebyshev polynomials: a non-conventional approach, Commun. Appl. Ind. Math. 10 (2019), no. 1, 1–19, 10.1515/caim-2019-0008.

[23] Clemente Cesarano, Generalized special functions in the description of fractional diffusive equations, Commun. Appl. Ind. Math. 10 (2019), no. 1, 31–40, 10.1515/caim-2019-0010.

[24] Rajendra Bhatia, Matrix Analysis, vol. 169, Springer-Verlag, New York, 2013, 10.1007/978-1-4612-0653-8.

[25] Rajendra Bhatia, Positive Definite Matrices, vol. 24, Princeton University Press, 2009, 10.1515/9781400827787.

[26] Eskandar Ameer, Muhammad Nazam, Hassen Aydi, Muhammad Arshad, and Nabil Mlaiki, On (Λ, Υ, ℛ)-contractions and applications to nonlinear matrix equations, Mathematics 7 (2019), no. 5, 443, 10.3390/math7050443.

Received: 2020-01-12
Revised: 2020-06-08
Accepted: 2020-06-22
Published Online: 2020-08-04

© 2020 Imed Kedim et al., published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
