Abstract

In this article, we construct an optimal family of iterative methods for finding a single root and then extend this family to determine all the distinct as well as multiple roots of single-variable nonlinear equations simultaneously. Convergence analysis is presented for both cases, showing that the optimal order of convergence is 4 for the single root finding methods and 6 for the simultaneous determination of all distinct as well as multiple roots of a nonlinear equation. The computational cost, basins of attraction, efficiency, log of residual, and numerical test examples show that the newly constructed methods are more efficient than existing methods in the literature.

1. Introduction

Solving a nonlinear equation, f(x) = 0 (1), is the oldest problem of science in general, and of mathematics in particular. Nonlinear equations have diverse applications in many areas of science and engineering. In general, to find the roots of (1), we turn to iterative schemes, which can be classified into those that approximate a single root and those that approximate all roots of (1). In this article, we work on both types of iterative methods. Many iterative methods of different convergence orders already exist in the literature (see [1–11]) for approximating the roots of (1). Ostrowski [12] defined the efficiency index I to classify these iterative methods in terms of their convergence order k and the number i of evaluations of the function or its derivatives per iteration, i.e., I = k^(1/i).

An iterative method is said to be optimal according to the Kung–Traub conjecture [13] if k = 2^(i−1) holds; 2^(i−1) is then the optimal order of convergence. The aforementioned methods approximate one root at a time, but mathematicians are also interested in finding all the roots of (1) simultaneously. Simultaneous iterative methods are popular because of their wider region of convergence; they are also more stable than single root finding methods and can be implemented for parallel computing as well. More detail on single as well as simultaneous determination of all roots can be found in [1, 10–28] and the references cited therein.

The most famous of the single root finding methods is the classical Newton–Raphson method:

Method (4), x_(k+1) = x_k − f(x_k)/f′(x_k), is optimal with efficiency index 2^(1/2) ≈ 1.41 according to the Kung–Traub conjecture. If we use the Weierstrass correction [26], w(x_i) = f(x_i)/∏_(j≠i)(x_i − x_j), in (4), then we obtain the classical Weierstrass–Dochev method for approximating all roots of nonlinear equation (1) as

Method (6) has convergence order 2. Later, Aberth [29] presented the third-order simultaneous method, given as x_i^(k+1) = x_i^(k) − N(x_i^(k)) / (1 − N(x_i^(k)) Σ_(j≠i) 1/(x_i^(k) − x_j^(k))), where N(x) = f(x)/f′(x).
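The two classical simultaneous schemes just recalled can be sketched in a few lines. This is a minimal illustration in their standard forms, not the authors' implementation; the test polynomial z³ − 1 and the initial guesses in the usage note are assumptions made for demonstration only:

```python
import numpy as np

def weierstrass_dochev(p, x0, tol=1e-12, maxit=100):
    """Second-order Weierstrass-Dochev iteration: subtract the Weierstrass
    correction w_i = p(x_i) / prod_{j != i}(x_i - x_j) from each guess.
    p must be a monic polynomial (a callable); x0 holds one guess per root."""
    x = np.array(x0, dtype=complex)
    for _ in range(maxit):
        w = np.array([p(x[i]) / np.prod(x[i] - np.delete(x, i))
                      for i in range(len(x))])
        x = x - w
        if np.max(np.abs(w)) < tol:
            break
    return x

def aberth_ehrlich(f, df, x0, tol=1e-12, maxit=100):
    """Third-order Aberth-Ehrlich iteration built on Newton's
    correction N(x) = f(x)/f'(x)."""
    x = np.array(x0, dtype=complex)
    for _ in range(maxit):
        N = f(x) / df(x)
        c = np.array([N[i] / (1.0 - N[i] * np.sum(1.0 / (x[i] - np.delete(x, i))))
                      for i in range(len(x))])
        x = x - c
        if np.max(np.abs(c)) < tol:
            break
    return x
```

For example, starting from rough guesses such as `[1.2 + 0.1j, -0.6 + 0.8j, -0.6 - 0.8j]`, both iterations drive all three cube roots of unity of p(z) = z³ − 1 to high accuracy.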

In this article, we first construct a family of optimal fourth-order methods using the weight-function procedure and then convert it into simultaneous iterative methods for finding all distinct as well as multiple roots of nonlinear equation (1).

2. Construction of Methods and Convergence Analysis

King [9] presented the following optimal fourth-order method (abbreviated as E1):

Cordero et al. [7] gave the fourth-order optimal method as follows (abbreviated as E2):

Chun [4] in 2008 gave the fourth-order optimal method as follows (abbreviated as E3):

Maheshwari [11] gave the fourth-order optimal method as follows (abbreviated as E4):

Chun [3] gave the fourth-order optimal method as follows (abbreviated as E5):

Kou et al. [10] gave the fourth-order optimal method as follows (abbreviated as E6):

Behzad [5] gave the fourth-order optimal method as follows (abbreviated as E7):

Chun [2] in 2006 gave the fourth-order optimal method as follows (abbreviated as E8):

Ostrowski [12] gave the fourth-order optimal method as follows (abbreviated as E9):

Here, we propose the following two families of iterative methods:where is a weight function and is a real number.

For the iteration scheme (17), we have the following convergence theorem, in which the error relation, obtained using the CAS Maple 18, is given.

Theorem 1. Let be a simple root of a sufficiently differentiable function in an open interval I. If is sufficiently close to and is a real-valued function satisfying , and , then the convergence order of the family of iterative methods (17) is four, and the error equation is given by (18), where .

Proof. Let be a simple root of and . By Taylor's series expansion of around , taking , we get and . Dividing (19) by (20), we have , so . Now, . Expanding about the origin, we have . By substituting into (25), we obtain . This proves the theorem.
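The fourth order claimed in Theorem 1 can be checked numerically via the computational order of convergence (COC), which the tables later in the paper report. A minimal sketch of the standard COC estimator follows; since the weight functions of family (17) are only fixed later (in Table 1), the illustration uses Newton's method (order 2) on the assumed test function f(x) = x² − 2:

```python
import math

def coc(xs):
    """Computational order of convergence from a list of iterates:
    rho ~ log(|e_k| / |e_{k-1}|) / log(|e_{k-1}| / |e_{k-2}|),
    with the errors e_k approximated by successive differences."""
    d = [abs(xs[i + 1] - xs[i]) for i in range(len(xs) - 1)]
    return math.log(d[-1] / d[-2]) / math.log(d[-2] / d[-3])

# Illustration: Newton's method on f(x) = x**2 - 2 starting from 1.5.
xs = [1.5]
for _ in range(4):
    x = xs[-1]
    xs.append(x - (x * x - 2) / (2 * x))  # Newton step
rho = coc(xs)  # close to the theoretical order 2
```

The same estimator applied to iterates of a method from family (17) should return a value close to 4.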

2.1. The Concrete Fourth-Order Methods

We now construct some concrete forms of the family of methods described by (17). Let us take the function satisfying the conditions of Theorem 1.

Therefore, by choosing the different weight functions given in Table 1, we obtain the following three new iterative methods with arbitrary constants:

Concrete method 1 (abbreviated as Q1):

Concrete method 2 (abbreviated as Q2):

Concrete method 3 (abbreviated as Q3):where and

2.2. Complex Dynamical Study of Families of Iterative Methods

Here, we discuss the dynamical study of the iterative methods (Q1–Q3 and E1–E9). We investigate the regions from which the initial estimates must be taken in order to reach the roots of the nonlinear equation. We numerically approximate the domains of attraction of the roots as a qualitative measure of how the iterative methods depend on the choice of initial estimates. To answer this question, we investigate the dynamics of the methods (Q1–Q3) and compare them with (E1–E9). Let us recall some basic concepts of this study in the context of complex dynamics; for more details on the dynamical behavior of iterative methods, one can consult [6, 30, 31]. Taking a rational function , where denotes the complex plane, the orbit of a point is defined as the set . The orbit of is said to converge if exists. A point is periodic with minimal period if holds, where is the smallest such positive integer. A periodic point with minimal period 1 is called a fixed point; a fixed point is attracting if , repelling if , and neutral otherwise. An attracting fixed point defines its basin of attraction as the set of starting points whose orbits tend to . The closure of the set of repelling periodic points of a rational map is known as the Julia set, denoted by , and its complement is the Fatou set, denoted by . An iterative method applied to find the roots of (1) provides such a rational map, and we are interested in the basins of attraction of the roots of nonlinear function (1). It is well known that the Fatou set contains the basins of attraction of the different roots, whereas the dynamics on the Julia set (in general a fractal) are unstable. For the graphical study, we take a grid of points in a square region . To each root of (1), we assign a color; a starting point receives the color of the root to which the corresponding orbit of the iterative method converges. The Jet color map is used.
We use as the stopping criterion, and the maximum number of iterations is taken as 20. We mark a point dark blue if the orbit of the iterative method does not converge to any root after 20 iterations, meaning that it remains at a distance greater than from every root. A different color is used for each root, so the basins of attraction of an iterative method are distinguished by their colors. Within a basin, brightness represents the number of iterations needed to reach the root of (1); the darkest blue regions denote the lack of convergence to any root of (1). Finally, in Tables 2–5, we present the elapsed time for computing the basins of attraction corresponding to the iterative maps (Q1–Q3 and E1–E9), measured with the tic-toc command in MATLAB (R2011b). Figures 1–4 show the basins of attraction of the iterative methods (Q1–Q3 and E1–E9) for the nonlinear functions , , , and , respectively. By observing the basins of attraction, we can easily judge the stability of the iterative methods (Q1–Q3 and E1–E9). The elapsed times, divergent regions, and brightness of the colors show that Q1–Q3 are better than E1–E9.
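Basin plots of this kind can be reproduced along the following lines. This sketch uses Newton's method on z³ − 1 as an illustrative stand-in (the paper's own maps and test functions differ), with the 20-iteration cap described above; the grid size and the 10⁻³ convergence tolerance are assumptions:

```python
import numpy as np

def basins(f, df, roots, lim=2.5, n=400, maxit=20, tol=1e-3):
    """Return, for each point of an n-by-n grid on [-lim, lim]^2, the index
    of the root its Newton orbit converges to; -1 marks non-convergence
    (plotted dark blue). Grid size and tol are illustrative assumptions."""
    s = np.linspace(-lim, lim, n)
    X, Y = np.meshgrid(s, s)
    Z = X + 1j * Y
    with np.errstate(divide="ignore", invalid="ignore"):
        for _ in range(maxit):
            Z = Z - f(Z) / df(Z)  # one Newton step on the whole grid
    idx = np.full(Z.shape, -1)
    for k, r in enumerate(roots):
        idx[np.abs(Z - r) < tol] = k  # color by nearest converged root
    return idx
```

A plot then follows from matplotlib, e.g. `plt.imshow(idx, cmap="jet")`, matching the Jet color map mentioned above.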

Figures 1(a)–1(l) present the basins of attraction of methods (Q1–Q3 and E1–E9) for a nonlinear function . In Figures 1(a)–1(l), brighter colors in the basins indicate fewer iterations to convergence. Table 2 shows the elapsed time of Q1–Q3 and E1–E9.

Figures 2(a)–2(l) present the basins of attraction of methods (Q1–Q3 and E1–E9) for a nonlinear function . In Figures 2(a)–2(l), brighter colors in the basins indicate fewer iterations to convergence. Table 3 shows the elapsed time of Q1–Q3 and E1–E9.

Figures 3(a)–3(l) present the basins of attraction of methods (Q1–Q3 and E1–E9) for a nonlinear function . In Figures 3(a)–3(l), brighter colors in the basins indicate fewer iterations to convergence. Table 4 shows the elapsed time of Q1–Q3 and E1–E9.

Figures 4(a)–4(l) present the basins of attraction of methods (Q1–Q3 and E1–E9) for a nonlinear function . In Figures 4(a)–4(l), brighter colors in the basins indicate fewer iterations to convergence. Table 5 shows the elapsed time of Q1–Q3 and E1–E9.

3. Generalization to Simultaneous Methods

Suppose nonlinear equation (1) has roots. Then, and can be approximated as

This implies

This gives the Aberth–Ehrlich method (7):

Now from (31), an approximation of is formed by replacing with as follows:

Using (33) in (4), we have the following method for finding all the distinct roots:

In the case of multiple roots, we have the following method:where is the multiplicity of the root and , in which and . Using the correction , we obtain the following new family of simultaneous iterative methods for extracting multiple roots of nonlinear equation (1):

Thus, we have constructed three new simultaneous iterative methods (37), abbreviated as SM1–SM3.

3.1. Convergence Analysis

In this section, we discuss the convergence analysis of the family of simultaneous methods (SM1–SM3), given in the form of the following theorem. Convergence of the method (34) will then follow from Theorem 2 when the multiplicities of the roots are one.

Theorem 2. Let be the n simple roots of nonlinear equation (1). If , , , …, are initial approximations of the roots, respectively, sufficiently close to the actual roots, then the order of convergence of the methods (SM1–SM3) equals six.

Proof. Let and be the errors in the and approximations, respectively. Then, obviously, for distinct roots, . Thus, for multiple roots, we have from (37) that , where from (18) and . Thus, . If it is assumed that the absolute values of all errors are of the same order, say , then from (40) we have . Hence, the theorem is proved.

4. Computational Aspect

Here, we compare the computational efficiencies of the Petkovic et al. [21] method and the new methods (SM1–SM3). As presented in [21], the efficiency of an iterative method can be estimated using the efficiency index , where is the computational cost and is the order of convergence of the iterative method. The computational cost is evaluated from the arithmetic operations per iteration, each weighted according to its execution time; the weights used for division, multiplication, and addition plus subtraction are , respectively. For a given polynomial of degree with roots, the numbers of divisions, multiplications, and additions and subtractions per iteration for all roots are denoted by , , and . The cost of computation can then be calculated as

Thus, (42) becomes

The number of operations on a complex polynomial with real and complex roots reduces to operations in real arithmetic; these are given in Table 6 for a polynomial of degree m, taking the dominant term of order . Applying (44) and the data given in Table 6, we calculate the percentage ratio [21] given by (45), where is the Petkovic method [21] of order 4. Figures 5(a)–5(d) graphically illustrate these percentage ratios: Figures 5(a)–5(c) show the computational efficiency of the methods (SM1–SM3) w.r.t. the method , and Figure 5(d) shows the computational efficiency of the method PJ6 w.r.t. (SM1–SM3). It is evident from Figures 5(a)–5(d) that the newly constructed simultaneous methods (SM1–SM3) are more efficient than those of [21].
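The efficiency comparison described above can be sketched as follows. The text elides the actual operation weights and the exact form of (42), so this sketch assumes a Petkovic-style index E = log(r)/C with purely hypothetical weights; it shows only the mechanics of the percentage ratio, not the paper's numbers:

```python
import math

def efficiency(order, n_div, n_mul, n_addsub, w=(2.5, 1.0, 0.33)):
    """Efficiency index E = log(order) / cost, with cost the weighted
    operation count. The weights w for division, multiplication, and
    addition/subtraction are hypothetical placeholders."""
    cost = w[0] * n_div + w[1] * n_mul + w[2] * n_addsub
    return math.log(order) / cost

def percentage_ratio(e_new, e_ref):
    """Percentage improvement of one method's efficiency over another's."""
    return 100.0 * (e_new / e_ref - 1.0)
```

With equal operation counts, a sixth-order method yields a positive percentage ratio over a fourth-order one, which is the shape of the comparison reported in Figures 5(a)–5(d).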

5. Numerical Results

Here, some numerical examples are considered in order to demonstrate the performance of our family of one-step fourth-order single root finding methods (Q1–Q3) and of the sixth-order simultaneous methods (SM1–SM3), respectively. We compare our family of optimal fourth-order single root finding methods (Q1–Q3) with the methods E1–E9. The family of simultaneous methods (SM1–SM3) of order six is compared with the method of [21] of the same order. All computations are performed using CAS Maple 18 with 9000 significant digits (64-digit floating-point arithmetic in the case of simultaneous methods). For the single root finding methods, the stopping criterion is , whereas for the simultaneous methods it is . We take for the single root finding methods and for the simultaneous determination of all roots of nonlinear equation (1).

Numerical test examples from [32–34] are provided in Tables 7–13. In Tables 7, 9, 11, and 12, we present the numerical results for the simultaneous determination of all roots, while Tables 8, 10, and 13 present those for the single root finding methods. In all tables, CO represents the convergence order; , the number of iterations; , the computational order of convergence; and CPU, the computational time in seconds. Table 14 shows the values of the arbitrary parameters and used in the iterative methods Q1–Q3 for test Examples 1–3.

We also calculate the CPU execution time, as all the calculations are done using Maple 18 (Intel(R) Core(TM) i3-3110M CPU @ 2.40 GHz with a 64-bit operating system). We observe from the tables that the CPU time of the methods SM1–SM3 is comparable to or better than that of the method of [21], showing the efficiency of our methods (SM1–SM3).

6. Applications in Engineering

In this section, we consider two examples from engineering.

Example 1. (beam designing model, see [34]).
An engineer considers a problem of the embedment s of a sheet-pile wall, which results in a nonlinear function given as follows. The exact roots of (48) are , , and , as shown in Figure 6.
The initial estimates for are taken as

Example 2. (fractional conversion, see [32]).
The expression described in [14, 35], , gives the fractional conversion of nitrogen, with hydrogen feed at 250 atm and 227 K. The exact roots of (50) are . The real roots of (50) are shown in Figure 7. The initial estimates for are taken as

Example 3. (see [33], for simultaneous determination of distinct and multiple roots).
Here, we consider another standard test function for the demonstration of the convergence behavior of the newly constructed methods.
Consider , with exact roots as shown in Figure 8. The initial guessed values have been taken as . For distinct roots, we use method (34), and for multiple roots, method (37). Figures 9(e)–9(g) show the residual fall of the iterative methods (Q1–Q3 and E1–E9); Figures 9(a), 9(b), and 9(d) present the residual fall of the simultaneous iterative methods (SM1–SM3 and PJ6) for the nonlinear functions , , and when the roots are simple; and Figure 9(c) presents the residual fall for multiple roots of the nonlinear function .
Table 14 shows the values of the parameters used in the iterative methods Q1–Q3 for test Examples 1–3.

7. Conclusion

We have developed here three families of single-step single root finding methods of optimal convergence order four and three families of simultaneous methods of order six, respectively. From Tables 7–13 and Figures 1–5 and 8, we observe that our methods (Q1–Q3 and SM1–SM3) are superior in terms of efficiency, stability, CPU time, and residual error to the methods E1–E9 and the PJ6 method, respectively.

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this article.

Authors’ Contributions

All authors contributed equally in the preparation of this manuscript.

Acknowledgments

This research work was supported by all the authors of this manuscript.