Article

Some High-Order Iterative Methods for Nonlinear Models Originating from Real Life Problems

by Malik Zaka Ullah 1,*, Ramandeep Behl 1 and Ioannis K. Argyros 2

1 Department of Mathematics, King Abdulaziz University, Jeddah 21589, Saudi Arabia
2 Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(8), 1249; https://doi.org/10.3390/math8081249
Submission received: 13 July 2020 / Revised: 21 July 2020 / Accepted: 28 July 2020 / Published: 31 July 2020

Abstract: We develop a sixth order Steffensen-type method with one parameter in order to solve systems of equations. Our study's novelty lies in the fact that two types of local convergence are established under weak conditions, including computable error bounds and uniqueness results. The performance of our methods is discussed and compared to other schemes using similar information. Finally, very large systems of equations (100 × 100 and 200 × 200) are solved in order to test the theoretical results and compare them favorably to earlier works.

1. Introduction

Many problems from Biology, Chemistry, Economics, Engineering, Mathematics, and Physics reduce to a mathematical expression of the form

$F(u) = 0. \quad (1)$

Here, $F:\Omega\subseteq B\to B$ is differentiable, $B$ is a Banach space and $\Omega$ is nonempty and open. Closed-form solutions are rarely available, so iterative methods [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16] converging to the solution $u_*$ are used.
In particular, we propose the following new scheme:

$$\begin{aligned} y_p &= u_p - [u_p + F(u_p), u_p; F]^{-1} F(u_p),\\ z_p &= u_p - \lambda\,[u_p + F(u_p), u_p; F]^{-1}\big(F(u_p) + F(y_p)\big) - (1-\lambda)\,[u_p, y_p; F]^{-1} F(u_p),\\ u_{p+1} &= z_p - [z_p + F(z_p), z_p; F]^{-1} F(z_p), \end{aligned} \quad (2)$$

where $u_0\in\Omega$ is an initial point and $\lambda\in\mathbb{R}$ is a free parameter. In addition to this, $[\cdot,\cdot;F]:\Omega\times\Omega\to\mathcal{L}(B,B)$ is a divided difference of order one.
We shall present two local convergence analyses and then demonstrate the advantages over other methods that use similar information.
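As a concrete illustration, the following Python sketch implements one full step of scheme (2) for a system $F:\mathbb{R}^n\to\mathbb{R}^n$. It is a minimal sketch, not the authors' implementation: the divided-difference routine dd(u, v) is left to the caller (one common choice appears in Section 4), and linear solves replace explicit inverses.

```python
import numpy as np

def scheme2_step(F, dd, u, lam):
    """One step of scheme (2); dd(u, v) must return the matrix [u, v; F]."""
    Fu = F(u)
    A = dd(u + Fu, u)                                  # [u + F(u), u; F]
    y = u - np.linalg.solve(A, Fu)                     # first substep
    B = dd(u, y)                                       # [u, y; F]
    z = (u - lam * np.linalg.solve(A, Fu + F(y))
           - (1.0 - lam) * np.linalg.solve(B, Fu))     # second substep
    Fz = F(z)
    C = dd(z + Fz, z)                                  # [z + F(z), z; F]
    return z - np.linalg.solve(C, Fz)                  # third substep
```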

2. Local Convergence Analysis I

We assume that $B=\mathbb{R}$. We study the local convergence of method (2) using standard Taylor expansions [9].
Theorem 1.
Suppose that the mapping F is sufficiently differentiable on Ω and that $u_*\in\Omega$ is a simple zero of F, so that $F'(u_*)^{-1}\in\mathcal{L}(B,B)$. Then, $\lim_{p\to\infty}u_p=u_*$ provided that $u_0$ is close enough to $u_*$. Moreover, the convergence order is six.
Proof. 
Set $\epsilon_p=u_p-u_*$ and $Q_p=\frac{F^{(p)}(u_*)}{p!}$. We shall use some Taylor series expansions, first for $F(u_p)$ and $F(u_p+F(u_p))$:
$F(u_p)=Q_1\epsilon_p+Q_2\epsilon_p^2+O(\epsilon_p^3) \quad (3)$

and

$F(u_p+F(u_p))=(Q_1+Q_1^2)\epsilon_p+(3Q_1Q_2+Q_2+Q_1^2Q_2)\epsilon_p^2+O(\epsilon_p^3), \quad (4)$

respectively.
By using the expressions (3) and (4) in the first substep of scheme (2), we have

$\hat{\epsilon}_p=y_p-u_*=b_1\epsilon_p^2+b_2\epsilon_p^3+b_3\epsilon_p^4+O(\epsilon_p^5), \quad (5)$

where

$b_1=\frac{Q_2}{Q_1}+Q_2$, $b_2=\frac{2Q_3}{Q_1}-\frac{2Q_2^2}{Q_1}-\frac{2Q_2^2}{Q_1^2}+Q_1Q_3+3Q_3-Q_2^2$ and $b_3=\frac{3Q_2^3}{Q_1}-2Q_1Q_2Q_3+Q_2^3+Q_1^2Q_4+4Q_1Q_4+6Q_4-7Q_2Q_3+\frac{3Q_4}{Q_1}+\frac{4Q_2^3}{Q_1^3}+\frac{5Q_2^3}{Q_1^2}-\frac{10Q_2Q_3}{Q_1}-\frac{7Q_2Q_3}{Q_1^2}$.
Secondly, we expand $F(y_p)$:

$F(y_p)=Q_1\hat{\epsilon}_p+Q_2\hat{\epsilon}_p^2+O(\hat{\epsilon}_p^3). \quad (6)$

In view of (3)–(6), the second substep of scheme (2) gives

$\bar{\epsilon}_p=z_p-u_*=b_4\epsilon_p^3+O(\epsilon_p^4), \quad (7)$
where

$b_4=\frac{(Q_1+1)\,Q_2^2}{Q_1^2}\big(1+\lambda(Q_1+1)\big).$
Thirdly, we need the expansions of $F(z_p)$ and $F(z_p+F(z_p))$:

$F(z_p)=Q_1\bar{\epsilon}_p+Q_2\bar{\epsilon}_p^2+O(\bar{\epsilon}_p^3). \quad (8)$

Hence, by (5) and (8), we get

$F(z_p+F(z_p))=b_5\bar{\epsilon}_p+b_6\bar{\epsilon}_p^2+O(\bar{\epsilon}_p^3), \quad (9)$

leading, together with the third substep of method (2), to

$\epsilon_{p+1}=u_{p+1}-u_*=b_7\epsilon_p^6+O(\epsilon_p^7), \quad (10)$

where

$b_5=Q_1+Q_1^2$, $b_6=3Q_1Q_2+Q_2+Q_1^2Q_2$ and $b_7=\Big(\frac{Q_2}{Q_1}+Q_2\Big)b_4^2$.
 ☐
According to Theorem 1, the applicability of method (2) is limited to mappings F with derivatives up to the seventh order.
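The sixth order claimed in Theorem 1 can be checked symbolically. The following sketch (assuming SymPy, with arbitrarily chosen model coefficients $Q_k$ and $\lambda=1/2$) expands the error of one full step of scheme (2) for a scalar polynomial model; every power of $\epsilon$ below the sixth should cancel.

```python
import sympy as sp

e = sp.symbols('e')                                   # e plays eps_p = u_p - u_*
lam = sp.Rational(1, 2)
Q = [0, 2, 3, 5, 7, 11, 13, 17]                       # Q[k] plays F^(k)(u_*)/k!
f = lambda t: sum(Q[k] * t**k for k in range(1, 8))   # Taylor model of F
dd = lambda s, t: (f(s) - f(t)) / (s - t)             # scalar divided difference
trunc = lambda expr: sp.series(expr, e, 0, 8).removeO()

y = trunc(e - f(e) / dd(e + f(e), e))                 # first substep error
z = trunc(e - lam * (f(e) + f(y)) / dd(e + f(e), e)
            - (1 - lam) * f(e) / dd(e, y))            # second substep error
e_new = trunc(z - f(z) / dd(z + f(z), z))             # full step error

print(sp.expand(e_new))   # expected: lowest-order surviving term is e**6
```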
Now, we choose $B=\mathbb{R}$, $\Omega=[-\frac{1}{2},\frac{3}{2}]$ and define a function f, as follows:

$f(\xi)=\begin{cases}\xi^3\ln\xi^2+\xi^5-\xi^4, & \xi\neq 0,\\ 0, & \xi=0.\end{cases} \quad (11)$

We have the following derivatives of the function f:

$f'(\xi)=3\xi^2\ln\xi^2+5\xi^4-4\xi^3+2\xi^2$, $f''(\xi)=6\xi\ln\xi^2+20\xi^3-12\xi^2+10\xi$, $f'''(\xi)=6\ln\xi^2+60\xi^2-24\xi+22$.
However, $f'''$ is unbounded on Ω, so the results of Section 2 cannot be used. In this case, we have a more general alternative, given in the upcoming section.

3. Local Convergence Analysis II

Consider $a\ge 0$ and $b>0$. Let $w_0:[0,\infty)\times[0,\infty)\to[0,\infty)$ be a continuous and increasing map with $w_0(0,0)=0$. Suppose that the equation

$w_0(at,t)=1 \quad (12)$

has a smallest positive zero $\rho_1$. In addition, we assume that $w:[0,\rho_1)\times[0,\rho_1)\to[0,\infty)$ is a continuous and increasing map with $w(0,0)=0$.
Consider the functions $g_1$ and $h_1$ defined on the semi-open interval $[0,\rho_1)$ as follows:

$g_1(t)=\frac{w(bt,t)}{1-w_0(at,t)}$, and $h_1(t)=g_1(t)-1$.

By these definitions, we have $h_1(0)=-1$ and $h_1(t)\to\infty$ as $t\to\rho_1^-$. Subsequently, the intermediate value theorem assures that $h_1$ has at least one zero in $(0,\rho_1)$. Let $r_1$ be the smallest such zero.
Suppose also that the equation

$w_0(t,g_1(t)t)=1 \quad (13)$

has a smallest positive zero $\rho_2$. Set $\rho_3=\min\{\rho_1,\rho_2\}$.
We define the functions $g_2$ and $h_2$ on the interval $[0,\rho_3)$ in the following way:

$g_2(t)=g_1(t)+\frac{b|1-\lambda|\,w\big(bt,(1+g_1(t))t\big)}{\big(1-w_0(at,t)\big)\big(1-w_0(t,g_1(t)t)\big)}+\frac{|\lambda|\,b\,g_1(t)}{1-w_0(at,t)}$, and $h_2(t)=g_2(t)-1$.

We get $h_2(0)=-1$ and $h_2(t)\to\infty$ as $t\to\rho_3^-$. Let $r_2$ stand for the smallest zero of $h_2$ on $(0,\rho_3)$.
Suppose further that the equation

$w_0\big(a\,g_2(t)t,\,g_2(t)t\big)=1 \quad (14)$

has a smallest positive solution $\rho_4$. Set $\rho=\min\{\rho_3,\rho_4\}$. Define the functions $g_3$ and $h_3$ on $[0,\rho)$ as

$g_3(t)=\frac{w\big(b\,g_2(t)t,\,g_2(t)t\big)\,g_2(t)}{1-w_0\big(a\,g_2(t)t,\,g_2(t)t\big)}$, and $h_3(t)=g_3(t)-1$.

We obtain $h_3(0)=-1$ and $h_3(t)\to\infty$ as $t\to\rho^-$. Let $r_3$ be the smallest zero of $h_3$ on $(0,\rho)$. Moreover, define

$r=\min\{r_i\},\quad i=1,2,3. \quad (15)$
Accordingly, for all $t\in[0,r)$ we have

$0\le w_0(at,t)<1, \quad (16)$

$0\le w_0(t,g_1(t)t)<1, \quad (17)$

$0\le w_0\big(a\,g_2(t)t,\,g_2(t)t\big)<1, \quad (18)$

and

$0\le g_i(t)<1. \quad (19)$
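In practice, the radii are found numerically. The sketch below (a rough implementation under the definitions (12)–(19); the scan grid used to bracket the smallest zeros is an ad hoc choice) computes $r_1$, $r_2$, $r_3$ and $r$ for given $w_0$, $w$, $a$, $b$ and $\lambda$; with the data of Example 3 from Section 4, its output can be compared with Table 5.

```python
import numpy as np
from scipy.optimize import brentq

def smallest_zero(h, t_max, n=20000):
    """Bracket and refine the smallest positive zero of h on (0, t_max]."""
    ts = np.linspace(1e-12, t_max, n)
    for t0, t1 in zip(ts[:-1], ts[1:]):
        if h(t0) * h(t1) < 0:
            return brentq(h, t0, t1)
    return None

def radii(w0, w, a, b, lam, t_max=10.0):
    g1 = lambda t: w(b*t, t) / (1 - w0(a*t, t))
    g2 = lambda t: (g1(t)
        + b*abs(1 - lam)*w(b*t, (1 + g1(t))*t)
          / ((1 - w0(a*t, t)) * (1 - w0(t, g1(t)*t)))
        + abs(lam)*b*g1(t) / (1 - w0(a*t, t)))
    g3 = lambda t: (w(b*g2(t)*t, g2(t)*t) * g2(t)
                    / (1 - w0(a*g2(t)*t, g2(t)*t)))
    rs = [smallest_zero(lambda t, g=g: g(t) - 1.0, t_max) for g in (g1, g2, g3)]
    return rs, min(r for r in rs if r is not None)

# Example 3 data: w0(s,t) = (e-1)(s+t)/2, w(s,t) = e(s+t)/2,
# a = (e+3)/2, b = (e+1)/2; r_1 should come out near 0.13886.
E = np.e
print(radii(lambda s, t: (E - 1)*(s + t)/2, lambda s, t: E*(s + t)/2,
            (E + 3)/2, (E + 1)/2, lam=0.0))
```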
Let $S(v,c)$ denote the open ball centered at $v\in B$ with radius $c>0$, and let $\bar{S}(v,c)$ denote the closure of $S(v,c)$.
We use the following conditions ( A ) in order to study the local convergence:
(a1) 
$F:\Omega\subseteq B\to B$ is a differentiable operator in the Fréchet sense and $[\cdot,\cdot;F]:\Omega\times\Omega\to\mathcal{L}(B,B)$ is a divided difference of order one. In addition to this, we assume that $u_*\in\Omega$ is a simple zero of F. At last, $F'(u_*)^{-1}\in\mathcal{L}(B,B)$.
(a2) 
Let $w_0:[0,\infty)\times[0,\infty)\to[0,\infty)$ be a continuous and increasing function with $w_0(0,0)=0$, and let $a\ge 0$ and $b>0$ be parameters such that, for each $u,y\in\Omega$,

$\|F'(u_*)^{-1}([u,y;F]-F'(u_*))\|\le w_0(\|u-u_*\|,\|y-u_*\|)$, $\|I+[u,u_*;F]\|\le a$, and $\|[u,u_*;F]\|\le b$.

Set $\Omega_0=\Omega\cap S(u_*,\rho_1)$, where $\rho_1$ exists and is given by (12).
(a3) 
We assume that $w:[0,\rho_1)\times[0,\rho_1)\to[0,\infty)$ is a continuous and increasing function such that, for each $u,y,\zeta,\eta\in\Omega_0$,

$\|F'(u_*)^{-1}([u,y;F]-[\zeta,\eta;F])\|\le w(\|u-\zeta\|,\|y-\eta\|)$.
(a4) 
$\bar{S}(u_*,R)\subseteq\Omega$, where $R=\max\{r,ar,br\}$, r is defined in (15) and $\rho_2$, $\rho_4$ exist and are given by (13) and (14), respectively.
(a5) 
There exists $\bar{r}\ge r$ such that

$w_0(0,\bar{r})<1$ or $w_0(\bar{r},0)<1$.

Set $\Omega_1=\Omega\cap\bar{S}(u_*,\bar{r})$.
Theorem 2.
Under the hypotheses (A), suppose further that $u_0\in S(u_*,r)\setminus\{u_*\}$. Then, the following assertions hold:

$\{u_p\}\subset S(u_*,r), \quad (20)$

$\lim_{p\to\infty}u_p=u_*, \quad (21)$

$\|y_p-u_*\|\le g_1(\|u_p-u_*\|)\|u_p-u_*\|\le\|u_p-u_*\|<r, \quad (22)$

$\|z_p-u_*\|\le g_2(\|u_p-u_*\|)\|u_p-u_*\|\le\|u_p-u_*\|, \quad (23)$

and

$\|u_{p+1}-u_*\|\le g_3(\|u_p-u_*\|)\|u_p-u_*\|\le\|u_p-u_*\|. \quad (24)$

In addition, $u_*$ is the unique solution of $F(u)=0$ in the set $\Omega_1$ mentioned in hypothesis (a5).
Proof. 
We first show items (20)–(24) by mathematical induction. Because $u_0\in S(u_*,r)\setminus\{u_*\}$ and by condition (a2), we have

$\|u_0+F(u_0)-u_*\|=\|(I+[u_0,u_*;F])(u_0-u_*)\|\le\|I+[u_0,u_*;F]\|\,\|u_0-u_*\|\le a\|u_0-u_*\|$

and

$\|F(u_0)\|=\|F(u_0)-F(u_*)\|=\|[u_0,u_*;F](u_0-u_*)\|\le b\|u_0-u_*\|$,

so $u_0+F(u_0)\in\bar{S}(u_*,R)$. Next, for $u,y\in S(u_*,r)\setminus\{u_*\}$,

$\|F'(u_*)^{-1}([u,y;F]-F'(u_*))\|\le w_0(\|u-u_*\|,\|y-u_*\|)\le w_0(r,r)<1, \quad (25)$

so the Banach lemma on invertible operators [3,4,5,12] gives $[u,y;F]^{-1}\in\mathcal{L}(B,B)$ and

$\|[u,y;F]^{-1}F'(u_*)\|\le\frac{1}{1-w_0(\|u-u_*\|,\|y-u_*\|)}. \quad (26)$

It also follows that $y_0$ is well defined.
Adopting (15), (16), (19) (for $i=1$), (a2), (a3), (26) and the first substep of (2), we get

$y_0-u_*=u_0-u_*-[u_0+F(u_0),u_0;F]^{-1}F(u_0)=[u_0+F(u_0),u_0;F]^{-1}\big([u_0+F(u_0),u_0;F]-[u_0,u_*;F]\big)(u_0-u_*)$,

so that

$\|y_0-u_*\|\le\|[u_0+F(u_0),u_0;F]^{-1}F'(u_*)\|\,\|F'(u_*)^{-1}([u_0+F(u_0),u_0;F]-[u_0,u_*;F])\|\,\|u_0-u_*\|\le\frac{w(\|F(u_0)\|,\|u_0-u_*\|)\,\|u_0-u_*\|}{1-w_0(a\|u_0-u_*\|,\|u_0-u_*\|)}\le\frac{w(b\|u_0-u_*\|,\|u_0-u_*\|)\,\|u_0-u_*\|}{1-w_0(a\|u_0-u_*\|,\|u_0-u_*\|)}=g_1(\|u_0-u_*\|)\|u_0-u_*\|\le\|u_0-u_*\|<r, \quad (27)$

so $y_0\in S(u_*,r)$ and (22) holds for $p=0$.
Similarly,

$\|F'(u_*)^{-1}([u_0,y_0;F]-F'(u_*))\|\le w_0(\|u_0-u_*\|,\|y_0-u_*\|)\le w_0\big(\|u_0-u_*\|,g_1(\|u_0-u_*\|)\|u_0-u_*\|\big)\le w_0(r,g_1(r)r)<1, \quad (28)$

so $[u_0,y_0;F]^{-1}\in\mathcal{L}(B,B)$ and

$\|[u_0,y_0;F]^{-1}F'(u_*)\|\le\frac{1}{1-w_0\big(\|u_0-u_*\|,g_1(\|u_0-u_*\|)\|u_0-u_*\|\big)}. \quad (29)$
It also follows that $z_0$ is well defined by the second substep of method (2) for $p=0$. In particular, we have

$z_0=u_0-\lambda[u_0+F(u_0),u_0;F]^{-1}\big(F(u_0)+F(y_0)\big)-(1-\lambda)[u_0,y_0;F]^{-1}F(u_0)=y_0+(1-\lambda)\big([u_0+F(u_0),u_0;F]^{-1}-[u_0,y_0;F]^{-1}\big)F(u_0)-\lambda[u_0+F(u_0),u_0;F]^{-1}F(y_0).$
Next, by (15), (19) (for $i=2$) and (25)–(29), we get, in turn, that

$\|z_0-u_*\|\le\|y_0-u_*\|+|1-\lambda|\,\|[u_0+F(u_0),u_0;F]^{-1}F'(u_*)\|\,\|F'(u_*)^{-1}([u_0,y_0;F]-[u_0+F(u_0),u_0;F])\|\,\|[u_0,y_0;F]^{-1}F'(u_*)\|\,\|F'(u_*)^{-1}[u_0,u_*;F]\|\,\|u_0-u_*\|+|\lambda|\,\|[u_0+F(u_0),u_0;F]^{-1}F'(u_*)\|\,\|F'(u_*)^{-1}[y_0,u_*;F]\|\,\|y_0-u_*\|$

$\le\Big[g_1(\|u_0-u_*\|)+\frac{b|1-\lambda|\,w\big(b\|u_0-u_*\|,(1+g_1(\|u_0-u_*\|))\|u_0-u_*\|\big)}{\big(1-w_0(a\|u_0-u_*\|,\|u_0-u_*\|)\big)\big(1-w_0(\|u_0-u_*\|,g_1(\|u_0-u_*\|)\|u_0-u_*\|)\big)}+\frac{|\lambda|\,b\,g_1(\|u_0-u_*\|)}{1-w_0(a\|u_0-u_*\|,\|u_0-u_*\|)}\Big]\|u_0-u_*\|=g_2(\|u_0-u_*\|)\|u_0-u_*\|\le\|u_0-u_*\|, \quad (30)$

so $z_0\in S(u_*,r)$ and (23) holds for $p=0$.
We also have, by (15), (18) and (29),

$\|F'(u_*)^{-1}([z_0+F(z_0),z_0;F]-F'(u_*))\|\le w_0(a\|z_0-u_*\|,\|z_0-u_*\|)\le w_0\big(a\,g_2(\|u_0-u_*\|)\|u_0-u_*\|,g_2(\|u_0-u_*\|)\|u_0-u_*\|\big)\le w_0\big(a\,g_2(r)r,g_2(r)r\big)<1. \quad (31)$

Accordingly, $[z_0+F(z_0),z_0;F]^{-1}\in\mathcal{L}(B,B)$ and

$\|[z_0+F(z_0),z_0;F]^{-1}F'(u_*)\|\le\frac{1}{1-w_0\big(a\,g_2(\|u_0-u_*\|)\|u_0-u_*\|,g_2(\|u_0-u_*\|)\|u_0-u_*\|\big)}. \quad (32)$
It also follows that $u_1$ is well defined by (32) and the last substep of method (2) for $p=0$. Then, as in (25)–(27) (for $i=3$), we obtain, in turn,

$\|u_1-u_*\|\le\frac{w(b\|z_0-u_*\|,\|z_0-u_*\|)\,\|z_0-u_*\|}{1-w_0(a\|z_0-u_*\|,\|z_0-u_*\|)}\le\frac{w\big(b\,g_2(\|u_0-u_*\|)\|u_0-u_*\|,g_2(\|u_0-u_*\|)\|u_0-u_*\|\big)\,g_2(\|u_0-u_*\|)\|u_0-u_*\|}{1-w_0\big(a\,g_2(\|u_0-u_*\|)\|u_0-u_*\|,g_2(\|u_0-u_*\|)\|u_0-u_*\|\big)}=g_3(\|u_0-u_*\|)\|u_0-u_*\|\le\|u_0-u_*\|, \quad (33)$
so $u_1\in S(u_*,r)$ and (24) holds for $p=0$. The induction is completed by substituting $u_0$, $y_0$, $z_0$, $u_1$ with $u_m$, $y_m$, $z_m$, $u_{m+1}$ in the preceding estimates. Using the estimate

$\|u_{m+1}-u_*\|\le\alpha\|u_m-u_*\|<r$,

where $\alpha=g_3(\|u_0-u_*\|)\in[0,1)$, we deduce that $\lim_{m\to\infty}u_m=u_*$ and $u_{m+1}\in S(u_*,r)$.
Finally, we show that the required solution is unique. Let $y_*\in\Omega_1$ with $F(y_*)=0$ and set $T=[u_*,y_*;F]$. Then, by (a2) and (a5), we get

$\|F'(u_*)^{-1}(T-F'(u_*))\|\le w_0(0,\|u_*-y_*\|)\le w_0(0,\bar{r})<1$,

so $T^{-1}\in\mathcal{L}(B,B)$. Finally, $u_*=y_*$ is deduced from $0=F(u_*)-F(y_*)=T(u_*-y_*)$. ☐
Remark 1.
Another way of defining the functions $g_i$, $h_i$ and radii $r_i$, $i=1,2,3$, is as follows. Let $\alpha=\max\{1,a\}$. Subsequently, as in (12)–(18), we shall have instead:
Suppose that the equation

$w_0(\alpha t,t)=1$

has a smallest positive solution $\bar{\rho}_1$. Let $\bar{w}:[0,\bar{\rho}_1]\times[0,\bar{\rho}_1]\to[0,\infty)$ be a continuous and increasing function with $\bar{w}(0,0)=0$.
Let the functions $\bar{g}_1$ and $\bar{h}_1$ be defined on the interval $[0,\bar{\rho}_1]$ by

$\bar{g}_1(t)=\frac{\bar{w}(bt,t)}{1-w_0(\alpha t,t)}$ and $\bar{h}_1(t)=\bar{g}_1(t)-1$.

Let $\bar{r}_1$ stand for the smallest positive root of $\bar{h}_1(t)=0$ in $(0,\bar{\rho}_1)$. Moreover, define the functions $\bar{g}_2$, $\bar{g}_3$, $\bar{h}_2$ and $\bar{h}_3$ on the closed interval $[0,\bar{\rho}_1]$, as follows:

$\bar{g}_2(t)=\bar{g}_1(t)+\frac{b|1-\lambda|\,\bar{w}\big(bt,(1+\bar{g}_1(t))t\big)}{\big(1-w_0(\alpha t,t)\big)^2}+\frac{|\lambda|\,b\,\bar{g}_1(t)}{1-w_0(\alpha t,t)}$, $\bar{g}_3(t)=\frac{\bar{w}\big(b\,\bar{g}_2(t)t,\bar{g}_2(t)t\big)\,\bar{g}_2(t)}{1-w_0(\alpha t,t)}$, $\bar{h}_2(t)=\bar{g}_2(t)-1$ and $\bar{h}_3(t)=\bar{g}_3(t)-1$.

Let $\bar{r}_2$ and $\bar{r}_3$ be the smallest positive roots of $\bar{h}_2(t)=0$ and $\bar{h}_3(t)=0$ on $[0,\bar{\rho}_1]$, respectively. Subsequently, Theorem 2 can be rewritten using the "bar" conditions and functions, with $\bar{r}=\min\{\bar{r}_i\}$.
Remark 2.
The convergence of method (2) to $u_*$ is established under the conditions of Theorem 1. The order of convergence under the weaker conditions of Theorem 2 can be established by using the computational order of convergence (COC) or the approximate computational order of convergence (ACOC) (for details, please see Section 5).

4. Numerical Examples

Here, we verify the convergence conditions on the examples below. We choose $[u,y;F]=\int_0^1 F'(y+\theta(u-y))\,d\theta$ in the examples. The hypotheses of Theorem 2 can then be verified for the given choices of the functions $w_0$, $w$ and the parameters a and b.
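For completeness, this divided difference can be approximated numerically. The sketch below (the eight-point Gauss–Legendre rule is an arbitrary choice) builds the matrix $[u,y;F]$ from a Jacobian routine Fprime and can be passed as dd to the step function sketched in Section 1.

```python
import numpy as np

def dd_integral(Fprime, u, y, m=8):
    """Approximate [u, y; F] = int_0^1 F'(y + theta (u - y)) dtheta."""
    nodes, weights = np.polynomial.legendre.leggauss(m)   # rule on [-1, 1]
    thetas = 0.5 * (nodes + 1.0)                          # mapped to [0, 1]
    return sum(0.5 * wk * Fprime(y + tk * (u - y))
               for wk, tk in zip(weights, thetas))
```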
Example 1.
Here, we investigate the application of our results to Hammerstein integral equations (see [9], pp. 19–20), with $B=C[0,1]$, as follows:

$F(u)(s_1)=u(s_1)-\frac{1}{5}\int_0^1 S(s_1,s_2)\,u(s_2)^3\,ds_2=0,\quad u\in C[0,1],\ s_1,s_2\in[0,1], \quad (34)$

where

$S(s_1,s_2)=\begin{cases}s_1(1-s_2), & s_1\le s_2,\\ (1-s_1)s_2, & s_2\le s_1.\end{cases}$
We use $\int_0^1\varphi(t)\,dt\approx\sum_{k=1}^{8}w_k\varphi(t_k)$ in (34), where the $t_k$ and $w_k$ are abscissas and weights, respectively. Writing $u_j$ for $u(t_j)$ ($j=1,2,3,\dots,8$) leads to

$5u_j-\sum_{k=1}^{8}a_{jk}u_k^3=0,\quad j=1,2,3,\dots,8,$

$a_{jk}=\begin{cases}w_k t_k(1-t_j), & k\le j,\\ w_k t_j(1-t_k), & j<k.\end{cases}$

The values of $t_k$ and $w_k$ for $k=8$ are given in Table 1. Subsequently, we have

$u_*=(1.002096,1.009900,1.019727,1.026436,1.026436,1.019727,1.009900,1.002096)^T$.

Accordingly, we set $w_0(s_1,s_2)=w(s_1,s_2)=\frac{3}{80}(s_1+s_2)$, $a=\frac{163}{80}$ and $b=\frac{83}{80}$. The radii for Example 1 are listed in Table 2 and Table 3.
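A sketch of the resulting discretized residual, with the abscissas and weights of Table 1 passed as arrays t and wq (hypothetical names):

```python
import numpy as np

def hammerstein_residual(u, t, wq):
    """Residual of 5 u_j - sum_k a_jk u_k^3 = 0 with a_jk built from Table 1."""
    n = len(u)
    a = np.empty((n, n))
    for j in range(n):
        for k in range(n):
            a[j, k] = wq[k] * (t[k]*(1 - t[j]) if k <= j else t[j]*(1 - t[k]))
    return 5.0 * u - a @ u**3
```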
Example 2.
Here, we choose the integral equation [17,18], with $B=C[0,1]$,

$F(\mu)(\gamma_1)=\mu(\gamma_1)-\int_0^1 G(\gamma_1,\gamma_2)\Big(\mu(\gamma_2)^{3/2}+\frac{\mu(\gamma_2)^2}{2}\Big)\,d\gamma_2=0,$

where

$G(\gamma_1,\gamma_2)=\begin{cases}(1-\gamma_1)\gamma_2, & \gamma_2\le\gamma_1,\\ \gamma_1(1-\gamma_2), & \gamma_1\le\gamma_2.\end{cases}$

The operator $F:C[0,1]\to C[0,1]$ so defined satisfies

$\Big\|\int_0^1 G(\gamma_1,\gamma_2)\,d\gamma_2\Big\|\le\frac{1}{8}.$
Moreover,

$F'(\mu)\eta(\gamma_1)=\eta(\gamma_1)-\int_0^1 G(\gamma_1,\gamma_2)\Big(\frac{3}{2}\mu(\gamma_2)^{1/2}+\mu(\gamma_2)\Big)\eta(\gamma_2)\,d\gamma_2,$

so $\mu_*(\gamma_1)=0$ is a solution, $F'(\mu_*)=I$, and

$\|F'(\mu_*)^{-1}(F'(\mu)-F'(\eta))\|\le\frac{1}{8}\Big(\frac{3}{2}\|\mu-\eta\|^{1/2}+\|\mu-\eta\|\Big).$

Hence, we have

$w_0(s,t)=w(s,t)=\frac{1}{16}\Big(\frac{3}{2}(s+t)^{1/2}+s+t\Big),\quad a=\frac{53}{16},\quad\text{and}\quad b=\frac{37}{16}.$

Therefore, our results can be utilized even though $F''$ is unbounded on Ω. The radii for Example 2 are given in Table 4.
Example 3.
We assume that the system of differential equations

$q_1'(\mu)-q_1(\mu)-1=0,\quad q_2'(\eta)-(e-1)\eta-1=0,\quad q_3'(\theta)-1=0,$

with $q_1(0)=q_2(0)=q_3(0)=0$, characterizes the motion of a particle in 3D for $(\mu,\eta,\theta)\in\Omega$. The required solution $v=(\mu,\eta,\theta)^T$ is then a zero of $K:=(q_1,q_2,q_3):\Omega\to\mathbb{R}^3$ given as

$K(v)=\Big(e^{\mu}-1,\ \frac{e-1}{2}\eta^2+\eta,\ \theta\Big)^T=0. \quad (41)$

It follows from (41) that

$K'(v)=\begin{pmatrix}e^{\mu} & 0 & 0\\ 0 & (e-1)\eta+1 & 0\\ 0 & 0 & 1\end{pmatrix},$

which yields

$w_0(s,t)=\frac{1}{2}(e-1)(s+t),\quad w(s,t)=\frac{1}{2}e(s+t),\quad a=\frac{1}{2}(e+3),\quad\text{and}\quad b=\frac{1}{2}(e+1).$

The radii for Example 3 are given in Table 5 and Table 6.
Example 4.
For the example at the end of Section 2, with $\Omega=B=\mathbb{R}$ and the equation $f(\xi)=0$, we get

$w_0(s,t)=w(s,t)=\frac{96.66297}{2}(s+t),\quad a=\frac{5}{2},\quad\text{and}\quad b=\frac{3}{2}.$

The radii of method (2) for Example 4 are listed in Table 7 and Table 8.

5. Applications with Large Systems

We choose λ = 0, λ = 0.5 and λ = 1 in our scheme (2); the resulting methods are denoted (PS1), (PS2) and (PS3), respectively. We compare our schemes with the sixth-order iterative methods suggested by Abbasbandy et al. [19] and Hueso et al. [20]; among them we picked methods (8) and (14)–(15) with $t_1=\frac{9}{4}$ and $s_2=\frac{9}{8}$, denoted (AS) and (HS), respectively. Moreover, we compare with the sixth-order iterative method given by Wang and Li [21], choosing their expression (6), denoted (WS). Finally, we contrast (2) with the sixth-order scheme of Sharma and Arora [22], picking their expression (13), denoted (SM). The iterative expressions are as follows:
Method AS:

$y_j=u_j-\frac{2}{3}F'(u_j)^{-1}F(u_j),$
$z_j=u_j-\Big[I+\frac{21}{8}F'(u_j)^{-1}F'(y_j)-\frac{9}{2}\big(F'(u_j)^{-1}F'(y_j)\big)^2+\frac{15}{8}\big(F'(u_j)^{-1}F'(y_j)\big)^3\Big]F'(u_j)^{-1}F(u_j),$
$u_{j+1}=z_j-\Big[3I-\frac{5}{2}F'(u_j)^{-1}F'(y_j)+\frac{1}{2}\big(F'(u_j)^{-1}F'(y_j)\big)^2\Big]F'(u_j)^{-1}F(z_j).$
Scheme HS:

$y_j=u_j-F'(u_j)^{-1}F(u_j),$
$H(u_j,y_j)=F'(u_j)^{-1}F'(y_j),\quad H(y_j,u_j)=F'(y_j)^{-1}F'(u_j),$
$G_s(u_j,y_j)=s_1 I+s_2 H(y_j,u_j)+s_3 H(u_j,y_j)+s_4 H(y_j,u_j)^2,$
$z_j=u_j-G_s(u_j,y_j)F'(u_j)^{-1}F(u_j),$
$u_{j+1}=z_j,$

where $s_1$, $s_2$, $s_3$ and $s_4$ are real numbers.
Iterative method WS:

$y_j=u_j-F'(u_j)^{-1}F(u_j),$
$z_j=y_j-\big(2I-F'(u_j)^{-1}F'(y_j)\big)F'(u_j)^{-1}F(y_j),$
$u_{j+1}=z_j-\big(2I-F'(u_j)^{-1}F'(y_j)\big)F'(u_j)^{-1}F(z_j).$
Scheme SM:

$y_j=u_j-\frac{2}{3}F'(u_j)^{-1}F(u_j),$
$z_j=u_j-\Big[pI+F'(u_j)^{-1}F'(y_j)\big(qI+rF'(u_j)^{-1}F'(y_j)\big)\Big]F'(u_j)^{-1}F(u_j),$
$u_{j+1}=z_j-\Big[\frac{5}{2}I-\frac{3}{2}F'(u_j)^{-1}F'(y_j)\Big]F'(u_j)^{-1}F(z_j),$

where $p=\frac{23}{8}$, $q=-3$ and $r=\frac{9}{8}$.
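As an implementation reference, here is a minimal sketch of one step of scheme (WS) for a system with Jacobian routine J; the other comparison schemes can be coded analogously.

```python
import numpy as np

def ws_step(F, J, u):
    """One step of scheme (WS); J(x) returns the Jacobian F'(x)."""
    Ju = J(u)
    y = u - np.linalg.solve(Ju, F(u))
    T = 2.0 * np.eye(len(u)) - np.linalg.solve(Ju, J(y))   # 2I - F'(u)^{-1}F'(y)
    z = y - T @ np.linalg.solve(Ju, F(y))
    return z - T @ np.linalg.solve(Ju, F(z))
```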
Here, j, $\|F(u_j)\|$, $\|u_{j+1}-u_j\|$ and

$\rho_*\approx\frac{\log\big(\|u_{j+1}-u_j\|/\|u_j-u_{j-1}\|\big)}{\log\big(\|u_j-u_{j-1}\|/\|u_{j-1}-u_{j-2}\|\big)}$

stand for the iteration index, the absolute residual error of the function F, the error between two successive iterations and the computational order of convergence, respectively. Their values are listed in Table 9, Table 10 and Table 11. Moreover, the quantity η is the final value of $\frac{\|u_{j+1}-u_j\|}{\|u_j-u_{j-1}\|^6}$.
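A small helper for $\rho_*$ (a sketch; it expects a list of at least four consecutive iterates):

```python
import numpy as np

def coc(u_hist):
    """Computational order of convergence from the last four iterates."""
    d = [np.linalg.norm(u_hist[i+1] - u_hist[i]) for i in range(len(u_hist) - 1)]
    return np.log(d[-1] / d[-2]) / np.log(d[-2] / d[-3])
```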
All of the above quantities were computed with Mathematica 9. To minimize round-off errors, we used multiple-precision arithmetic with 1000 digits of mantissa. The notation $b_1(\pm b_2)$ stands for $b_1\times 10^{\pm b_2}$ in all tables. We adopted the command "AbsoluteTiming[]" to measure the CPU time; we ran our programs three times and report the average CPU time in Table 12, together with the time used by each iterative method on each problem. We point out that, for large problems, our methods PS1–PS3 use the least time, so they are very competitive. The configuration of the computer used is given below:
Processor: Intel(R) Core(TM) i7-4790 CPU @ 3.60 GHz
Make: HP
RAM: 8.00 GB
System type: 64-bit-Operating System, x64-based processor.
Example 5.
Here, we deal with a boundary value problem from Ortega and Rheinboldt [9], given by

$y''=\frac{y^3+6y'+1}{2}-\frac{3}{2}x,\quad y(0)=0,\quad y(1)=1. \quad (46)$
We consider the partition of the interval [0, 1]

$u_0=0<u_1<u_2<u_3<\cdots<u_p=1,\quad\text{where}\quad u_{k+1}=u_k+h,\ h=\frac{1}{p},$

and set $y_0=y(u_0)=0$, $y_1=y(u_1),\dots,y_{p-1}=y(u_{p-1})$, $y_p=y(u_p)=1$.
Now, we discretize expression (46) by adopting the following numerical formulas for the derivatives:

$y_j'=\frac{y_{j+1}-y_{j-1}}{2h},\quad y_j''=\frac{y_{j-1}-2y_j+y_{j+1}}{h^2},\quad j=1,2,\dots,p-1, \quad (47)$
which leads to

$y_{j+1}-2y_j+y_{j-1}-\frac{h^2}{2}y_j^3-\frac{3}{2}h\,(y_{j+1}-y_{j-1})+\frac{3}{2}u_j h^2-\frac{1}{2}h^2=0,\quad j=1,2,\dots,p-1,$

a $(p-1)\times(p-1)$ system of nonlinear equations.
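Assuming the discretized equations displayed above, the residual of the $(p-1)$-dimensional system can be assembled as follows (a sketch; y_in collects the interior unknowns):

```python
import numpy as np

def bvp_residual(y_in, p):
    """Residual of the discretized BVP with y_0 = 0 and y_p = 1 appended."""
    h = 1.0 / p
    x = np.arange(1, p) * h                       # interior nodes u_j
    y = np.concatenate(([0.0], y_in, [1.0]))      # add boundary values
    j = np.arange(1, p)
    return (y[j+1] - 2.0*y[j] + y[j-1]
            - 0.5*h**2 * y[j]**3
            - 1.5*h * (y[j+1] - y[j-1])
            + 1.5*h**2 * x - 0.5*h**2)
```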
For the specific value $p=7$, we have a $6\times 6$ system, and the required solution is

$u_*=(0.07654393,0.1658739,0.2715210,0.3984540,0.5538864,0.7486878)^T.$

The computational results are listed in Table 9, on the basis of the initial approximation $y^{(0)}=\big(\frac{3}{2},\frac{3}{2},\frac{3}{2},\frac{3}{2},\frac{3}{2},\frac{3}{2}\big)^T$.
Example 6.
The classical two-dimensional (2D) Bratu problem [23,24] is given by

$u_{\mu\mu}+u_{\theta\theta}+C e^{u}=0,\quad \Omega=\{(\mu,\theta):\ 0\le\mu\le 1,\ 0\le\theta\le 1\},\quad\text{with boundary condition } u=0 \text{ on } \partial\Omega. \quad (48)$

By adopting a finite difference discretization, we can reduce the above PDE (48) to a nonlinear system. For this purpose, we denote by $\Delta_{i,j}=u(\mu_i,\theta_j)$ the numerical solution at the grid points of the mesh. In addition, $M_1$ and $M_2$ stand for the numbers of steps in the directions of μ and θ, respectively, and h and k are the corresponding step sizes. Applying the central difference formula

$u_{\mu\mu}(\mu_i,\theta_j)\approx\frac{\Delta_{i+1,j}-2\Delta_{i,j}+\Delta_{i-1,j}}{h^2},$

and its analogue for $u_{\theta\theta}$, leads (for $h=k$) to

$\Delta_{i,j+1}+\Delta_{i,j-1}-4\Delta_{i,j}+\Delta_{i+1,j}+\Delta_{i-1,j}+h^2 C\exp(\Delta_{i,j})=0,\quad i=1,2,3,\dots,M_1-1,\ j=1,2,3,\dots,M_2-1.$
To obtain a large $100\times 100$ system, we choose $M_1=M_2=11$, $C=0.1$ and $h=\frac{1}{11}$. The numerical results are listed in Table 10, based on the initial guess $u^{(0)}=\big(0.1\sin(\pi h i)\sin(\pi h j)\big)^T$, $i,j=1,\dots,10$.
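A sketch of the resulting 100-dimensional residual and of the stated initial guess (the $-4\Delta_{i,j}$ coefficient is the standard five-point Laplacian):

```python
import numpy as np

def bratu_residual(U, h=1.0/11, C=0.1):
    """Residual of the discretized Bratu problem on the interior 10 x 10 grid."""
    P = np.pad(U, 1)                              # zero Dirichlet boundary
    return (P[2:, 1:-1] + P[:-2, 1:-1] + P[1:-1, 2:] + P[1:-1, :-2]
            - 4.0*U + h**2 * C * np.exp(U))

i = np.arange(1, 11)
u0 = 0.1 * np.outer(np.sin(np.pi*i/11), np.sin(np.pi*i/11))   # initial guess
print(np.linalg.norm(bratu_residual(u0)))
```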
Example 7.
Let us consider the following nonlinear system:

$u_j^2 u_{j+1}-1=0,\quad 1\le j\le p-1,\qquad u_p^2 u_1-1=0.$

For the specific value $p=200$, we have a $200\times 200$ system, and we chose the starting point

$u^{(0)}=(1.25,1.25,1.25,\dots,1.25)^T.$

The required solution of this system is $u_*=(1,1,1,\dots,1)^T$. Table 11 provides the numerical results.
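For completeness, a sketch of the residual of this cyclic system:

```python
import numpy as np

def cyclic_residual(u):
    """Residual of u_j^2 u_{j+1} - 1 = 0, closed cyclically by u_p^2 u_1 - 1 = 0."""
    return u**2 * np.roll(u, -1) - 1.0

u0 = np.full(200, 1.25)
print(np.linalg.norm(cyclic_residual(u0)))   # residual at the starting point
```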
Remark 3. On the basis of Table 9, Table 10 and Table 11, we conclude that our methods PS1, PS2 and PS3 perform better than the existing schemes AS, HS, WS and SM with respect to residual errors, errors between two consecutive iterations, and asymptotic error constants. In addition, our methods demonstrate a stable computational order of convergence. Finally, our methods not only perform better than the existing methods in the numerical results, but also take about half of the CPU time compared with the other existing methods (see Table 12).

6. Conclusions

We presented a new family of Steffensen-type methods with one parameter. The local convergence is studied in Section 2 using Taylor expansions and derivatives up to order seven, when $B=\mathbb{R}$. To extend the applicability of these iterative methods, in Section 3 we only use hypotheses on the first derivative and on divided differences of order one, for Banach space valued operators. In this way, we also find computable error bounds on $\|u_p-u_*\|$ as well as uniqueness results, based on generalized Lipschitz-type real functions. Numerical examples and favorable comparisons with other methods can be found in Section 4 and Section 5.

Author Contributions

M.Z.U.: Validation, Review & Editing; R.B. and I.K.A.: Conceptualization, Methodology, Validation, Writing—Original Draft Preparation, Writing—Review & Editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Research and Development Office (RDO) at the Ministry of Education, Kingdom of Saudi Arabia, Grant No. (HIQI-22-2019).

Acknowledgments

This project was funded by the Research and Development Office (RDO) at the Ministry of Education, Kingdom of Saudi Arabia, Grant No. (HIQI-22-2019). The authors also acknowledge with thanks the Research and Development Office (RDO-KAU) at King Abdulaziz University for technical support.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Amat, S.; Bermudez, C.; Hernández-Verón, M.A.; Martínez, E. On an efficient k-step iterative method for nonlinear equations. J. Comput. Appl. Math. 2016, 302, 258–271.
  2. Argyros, I.K. Convergence and Applications of Newton-Type Iterations; Springer: New York, NY, USA, 2008.
  3. Argyros, I.K.; George, S. Mathematical Modeling for the Solution of Equations and Systems of Equations with Applications; Nova Publishers: New York, NY, USA, 2019; Volume III.
  4. Argyros, I.K.; Hilout, S. Weaker conditions for the convergence of Newton's method. J. Complex. 2012, 28, 364–387.
  5. Argyros, I.K.; Magrenan, A.A. A Contemporary Study of Iterative Methods; Academic Press: Cambridge, MA, USA; Elsevier: Amsterdam, The Netherlands, 2018.
  6. Cordero, A.; Torregrosa, J.R. Low-complexity root finding iteration functions with no derivatives of any order of convergence. J. Comput. Appl. Math. 2015, 275, 502–515.
  7. Ezquerro, J.A.; Hernández, M.A. How to improve the domain of starting points for Steffensen's method. Stud. Appl. Math. 2014, 132, 354–380.
  8. Potra, F.A.; Pták, V. Nondiscrete Induction and Iterative Processes; Pitman Advanced Publishing Program: Boston, MA, USA, 1984; Volume 103.
  9. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970.
  10. Rheinboldt, W.C. An adaptive continuation process for solving systems of equations. Pol. Acad. Sci. Banach Cent. Publ. 1978, 3, 129–142.
  11. Sharma, J.R.; Ghua, R.K.; Sharma, R. An efficient fourth-order weighted Newton method for systems of nonlinear equations. Numer. Algorithms 2013, 62, 307–325.
  12. Traub, J.F. Iterative Methods for the Solutions of Equations; American Mathematical Society: Providence, RI, USA, 1982.
  13. Džunić, J.; Petković, M.S. A cubically convergent Steffensen-like method for solving nonlinear equations. Appl. Math. Lett. 2012, 25, 1881–1886.
  14. Alarcón, V.; Amat, S.; Busquier, S.; López, D.J. A Steffensen's type method in Banach spaces with applications on boundary-value problems. J. Comput. Appl. Math. 2008, 216, 243–250.
  15. Behl, R.; Argyros, I.K.; Machado, J.A.T. Ball comparison between three sixth order methods for Banach space valued operators. Mathematics 2020, 8, 667.
  16. Iliev, A.; Kyurkchiev, N. Nontrivial Methods in Numerical Analysis: Selected Topics in Numerical Analysis; LAP LAMBERT Academic Publishing: Saarbrücken, Germany, 2010; ISBN 978-3-8433-6793-6.
  17. Ezquerro, J.A.; Hernández, M.A. New iterations of R-order four with reduced computational cost. BIT Numer. Math. 2009, 49, 325–342.
  18. Hernández, M.A.; Martinez, E. On the semilocal convergence of a three steps Newton-type process under mild convergence conditions. Numer. Algor. 2015, 70, 377–392.
  19. Abbasbandy, S.; Bakhtiari, P.; Cordero, A.; Torregrosa, J.R.; Lotfi, T. New efficient methods for solving nonlinear systems of equations with arbitrary even order. Appl. Math. Comput. 2016, 287–288, 94–103.
  20. Hueso, J.L.; Martínez, E.; Teruel, C. Convergence, efficiency and dynamics of new fourth and sixth order families of iterative methods for nonlinear systems. J. Comput. Appl. Math. 2015, 275, 412–420.
  21. Wang, X.; Li, Y. An Efficient Sixth Order Newton Type Method for Solving Nonlinear Systems. Algorithms 2017, 10, 45.
  22. Sharma, J.R.; Arora, H. Efficient Jarratt-like methods for solving systems of nonlinear equations. Calcolo 2014, 51, 193–210.
  23. Kapania, R.K. A pseudo-spectral solution of 2-parameter Bratu's equation. Comput. Mech. 1990, 6, 55–63.
  24. Simpson, R.B. A method for the numerical determination of bifurcation states of nonlinear systems of equations. SIAM J. Numer. Anal. 1975, 12, 439–451.
Table 1. Abscissas and weights for k = 8.

j  t_j                           w_j
1  0.01985507175123188415821957  0.05061426814518812957626567
2  0.10166676129318663020422303  0.11119051722668723527217800
3  0.23723379504183550709113047  0.15685332293894364366898110
4  0.40828267875217509753026193  0.18134189168918099148257522
5  0.59171732124782490246973807  0.18134189168918099148257522
6  0.76276620495816449290886952  0.15685332293894364366898110
7  0.89833323870681336979577696  0.11119051722668723527217800
8  0.98014492824876811584178043  0.05061426814518812957626567
Table 2. Convergence radii for Example 1.

λ    r_1      r_2      r_3      r
0    5.25452  3.87208  4.09301  3.87208
0.5  5.25452  4.26006  4.42602  4.26006
1    5.25452  5.25452  5.25452  5.25452
Table 3. Convergence radii for Example 1 with bar functions.

λ    r_1      r_2      r_3      r
0    5.25452  3.67748  3.87626  3.67748
0.5  5.25452  4.07351  4.17413  4.07351
1    5.25452  5.25452  4.89162  4.89162
Table 4. Convergence radii for Example 2 with bar functions.

λ    r_1      r_2       r_3      r
0    1.03137  0.502403  0.61211  0.502403
0.5  1.03137  0.61199   0.70738  0.61199
1    1.03137  1.03137   1.03137  1.03137
Table 5. Convergence radii for Example 3.

λ    r_1        r_2        r_3        r
0    0.1388596  0.0921375  0.083356   0.083356
0.5  0.1388596  0.0921375  0.086297   0.086297
1    0.1388596  0.1388596  0.1388596  0.1388596
Table 6. Convergence radii for Example 3 with bar functions.

λ    r_1        r_2        r_3        r
0    0.1388596  0.0487471  0.1229551  0.0487471
0.5  0.1388596  0.0487471  0.1377815  0.0487471
1    0.1388596  0.1388596  0.1380780  0.1380780
Table 7. Convergence radii for Example 4.

λ    r_1         r_2         r_3         r
0    0.00344841  0.00239612  0.00256623  0.00239612
0.5  0.00344841  0.00267769  0.00280807  0.00267769
1    0.00344841  0.00344841  0.00344841  0.00344841
Table 8. Convergence radii for Example 4 with bar functions.

λ    r_1         r_2         r_3         r
0    0.00344841  0.00225955  0.00246765  0.00225955
0.5  0.00344841  0.00225955  0.00246765  0.00225955
1    0.00344841  0.00344841  0.00334891  0.00344841
Table 9. Comparisons of different methods on the boundary value problem of Example 5.

Methods  j  ‖F(u_j)‖   ‖u_{j+1}−u_j‖  ρ*      ‖u_{j+1}−u_j‖/‖u_j−u_{j−1}‖^6
AS       1  1.0(-4)    6.1(-4)
         2  8.7(-27)   2.8(-26)               5.133234733(-7)
         3  8.1(-161)  2.6(-160)      5.9985  5.920693970(-7)
HS       1  1.3(-4)    5.7(-4)
         2  8.1(-23)   2.8(-22)               8.252588019(-3)
         3  3.5(-114)  1.0(-113)      4.9954  2.013368332(+16)
WS       1  2.6(-4)    1.1(-3)
         2  1.1(-25)   3.1(-25)               1.977528884(-7)
         3  8.6(-155)  2.4(-154)      5.9957  2.448277731(-7)
SM       1  7.3(-5)    2.7(-3)
         2  8.2(-29)   2.7(-28)               7.804847473(-7)
         3  1.1(-172)  3.6(-172)      5.9973  9.053257416(-7)
PS1      1  4.9(-6)    1.6(-5)
         2  1.6(-38)   4.8(-38)               2.474537279(-9)
         3  9.3(-234)  2.7(-233)      6.0010  2.302596208(-9)
PS2      1  1.1(-5)    3.7(-5)
         2  3.7(-36)   1.1(-35)               4.513404180(-9)
         3  2.4(-219)  7.2(-219)      6.0013  4.108378955(-9)
PS3      1  1.9(-5)    6.5(-5)
         2  1.9(-34)   5.6(-34)               7.168046437(-9)
         3  6.6(-209)  1.9(-208)      6.0016  6.434316717(-9)
Table 10. Comparisons of different methods on the two-dimensional (2D) Bratu problem of Example 6.

Methods  j  ‖F(u_j)‖   ‖u_{j+1}−u_j‖  ρ*      ‖u_{j+1}−u_j‖/‖u_j−u_{j−1}‖^6
AS       1  4.4(-15)   2.4(-14)
         2  6.9(-95)   3.5(-94)               1.428095547(-12)
         3  7.9(-574)  3.9(-573)      5.9994  1.973434769(-12)
HS       1  2.1(-13)   1.2(-12)
         2  2.1(-71)   1.2(-70)               7.368055345(-11)
         3  1.7(-361)  9.3(-361)      4.9997  3.495510769(+1)
WS       1  5.0(-19)   2.9(-18)
         2  1.7(-122)  1.0(-121)              1.754949400(-16)
         3  3.1(-743)  1.8(-742)      5.9999  1.666475363(-16)
SM       1  4.4(-15)   2.4(-14)
         2  7.1(-95)   3.6(-94)               1.433541371(-12)
         3  9.2(-574)  4.5(-573)      5.9994  1.433541371(-12)
PS1      1  9.1(-21)   5.3(-20)
         2  1.2(-134)  7.1(-134)              3.060974255(-18)
         3  6.9(-818)  4.0(-817)      6.0000  3.068006721(-18)
PS2      1  1.9(-20)   1.1(-19)
         2  1.7(-132)  1.0(-131)              6.095821945(-18)
         3  1.1(-804)  6.7(-804)      6.0000  6.105210728(-18)
PS3      1  3.1(-20)   1.8(-19)
         2  6.7(-131)  3.9(-130)              1.016575545(-17)
         3  6.3(-795)  3.7(-794)      6.0000  1.017779424(-17)
Table 11. Comparisons of different methods on Example 7.

Methods  j  ‖F(u_j)‖   ‖u_{j+1}−u_j‖  ρ*      ‖u_{j+1}−u_j‖/‖u_j−u_{j−1}‖^6
AS       1  5.2(-3)    1.7(-3)
         2  6.2(-21)   2.1(-21)               7.686036043(-5)
         3  1.7(-128)  5.8(-129)      6.0000  7.695242316(-5)
HS       1  2.3(-3)    7.7(-4)
         2  5.4(-20)   1.8(-20)               8.659247536(-2)
         3  3.9(-103)  1.3(-103)      5.0000  3.689254113(+15)
WS       1  3.5(-3)    1.2(-3)
         2  4.0(-22)   1.3(-22)               5.299207889(-5)
         3  8.4(-136)  2.8(-136)      6.0000  5.303300859(-5)
SM       1  3.0(-3)    1.0(-3)
         2  1.4(-22)   4.6(-23)               4.671758076(-5)
         3  1.3(-138)  4.3(-139)      6.0000  4.674761498(-5)
PS1      1  5.1(-3)    1.7(-3)
         2  8.4(-21)   2.8(-21)               1.130483172(-4)
         3  1.6(-127)  5.5(-128)      6.000   1.131370850(-4)
PS2      1  1.0(-1)    3.3(-2)
         2  4.0(-12)   1.3(-12)               9.906447117(-4)
         3  1.7(-74)   5.8(-75)       5.9989  1.018233765(-4)
PS3      1  3.3(-1)    1.1(-1)
         2  1.3(-8)    4.3(-9)                2.565472254(-3)
         3  5.6(-53)   1.9(-53)       5.9943  2.828427114(-3)
Table 12. CPU time of different methods on Examples 5–7.

Methods  Example 5  Example 6   Example 7   Total Time  Average Time
AS       0.465330   210.079553  356.906591  567.451474  189.1504913
HS       0.583412   189.541919  366.511753  556.637084  185.5456947
WS       0.274193   128.377322  182.956711  311.608226  103.8694087
SM       1.130812   126.641140  401.627979  529.399931  176.4666437
PS1      0.101071   120.094370  52.204957   172.400398  57.46679933
PS2      0.100071   117.901198  52.146903   170.148172  56.71605733
PS3      0.100083   117.923227  51.972773   169.996083  56.665361
According to the CPU times, method PS3 takes the lowest time to execute. All of the other schemes AS, HS, WS and SM consume at least double the CPU time compared with our methods PS1, PS2 and PS3. So, we conclude that our methods provide results faster than the other existing methods.
