A Two-Parameter Family of Fourth-Order Iterative Methods With Optimal Convergence For Multiple Zeros
Abstract
In this paper, we develop a family of fourth-order iterative methods using the weighted
harmonic mean of two derivative values to compute approximate multiple roots of
nonlinear equations. The methods are proved to be optimally convergent in the sense of
Kung-Traub's optimal order. Numerical experiments for various test equations confirm
the validity of the convergence order and the asymptotic error constants of the developed methods.
Mathematics Subject Classification (2000): 65H05, 65H99, 41A25, 65B99.
Keywords: multiple root, fourth-order method, error equation, asymptotic error constant, two-parameter family
1. Introduction
The development of new iterative methods for locating multiple roots of a given nonlinear equation
deserves special attention, both theoretically and numerically, although prior knowledge about
the multiplicity of the sought zero is required[1]. Traub[19] discussed the theoretical importance of
multiple-root finders even though the multiplicity is not known a priori, stating: "Since the multiplicity
of a zero is often not known a priori, the results are of limited value as far as practical problems are
concerned. The study is, however, of considerable theoretical interest and leads to some surprising
results." This motivates the analysis of the multiple-root finders presented in this paper. In case the
multiplicity is not known, interested readers should refer to the methods suggested by Wu and Fu[21]
and Yun[22, 23].
Various iterative schemes for finding multiple roots of a nonlinear equation with known multiplicity
have been proposed and investigated by many researchers[2, 3, 9, 10, 14, 17, 24]. Johnson and Neta[6]
presented a fourth-order method extending Jarratt's method. Neta[13] also developed a fourth-order
method requiring one function evaluation and three derivative evaluations per iteration, based on Murakami's
method[11]. Li et al.[8] proposed the following fourth-order method, which needs evaluations of one
function and two derivatives per iteration, for x_0 chosen in a neighborhood of the sought zero α of f(x)
with known multiplicity m ≥ 1:
x_{n+1} = x_n − β f(x_n)/f'(x_n) + f(x_n)/(φ f'(x_n) + δ f'(y_n)),   n = 0, 1, 2, · · · ,   (1)

where y_n = x_n − (2m/(m+2)) f(x_n)/f'(x_n), β = −m(m−2)/2, φ = 1/m and δ = −(1/m)(m/(m+2))^{−m}, with the following
error equation
e_{n+1} = K_4 e_n^4 + O(e_n^5),   n = 0, 1, 2, · · · ,   (2)

where K_4 = ((m^3 + 2m^2 + 2m − 2)/(3m^4 (m+1)^3)) θ_1^3 − (1/(m(m+1)^2 (m+2))) θ_1 θ_2 + (m/((m+1)(m+3)(m+2)^3)) θ_3,
e_n = x_n − α and θ_j = f^{(m+j)}(α)/f^{(m)}(α) for j = 1, 2, 3.
Based on the Jarratt[5] scheme for simple roots, Sharma and Sharma[15] developed the following fourth-order
convergent scheme:

x_{n+1} = x_n − A f(x_n)/f'(x_n) − B f(x_n)/f'(y_n) − C (f(x_n)/f'(y_n))^2 (f(x_n)/f'(x_n))^{−1},   n = 0, 1, 2, · · · ,   (3)

where A = (1/8) m(m^3 − 4m + 8), B = −(1/4) m(m − 1)(m + 2)^2 ((m+2)/m)^m, C = (1/8) m(m + 2)^3 ((m+2)/m)^{2m} and
y_n = x_n − (2m/(m+2)) f(x_n)/f'(x_n); they also derived the corresponding error equation.
We now proceed to develop a new iterative method for finding an approximate root α of a nonlinear
equation f(x) = 0, assuming that the multiplicity of α is known. To do so, we first suppose that a function
f : C → C has a multiple root α with integer multiplicity m ≥ 1 and is analytic in a small neighborhood
of α. We then propose the following new iterative method, free of second derivatives, with an initial guess
x_0 sufficiently close to α:

x_{n+1} = y_n − a f(x_n)/f'(x_n) − b F(y_n)/f'(x_n) − c f(x_n)/f'(y_n),   n = 0, 1, 2, · · · ,   (6)

where

y_n = x_n − γ f(x_n)/f'(x_n),   F(y_n) = f(x_n) + (y_n − x_n) · λ f'(x_n) f'(y_n)/(f'(x_n) + ρ f'(y_n)),

with a, b, c, γ, λ and ρ as parameters to be chosen for maximal order of convergence[12, 19]. One
should note that F(y_n) is obtained from the Taylor expansion of f(y_n) about x_n up to the first-order
term, with the weighted harmonic mean[16] of f'(x_n) and f'(y_n) in place of the derivative.
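To make the construction concrete, scheme (6) can be sketched in code. The sketch below is our own illustration (the names family_step, f, fp, lam, rho are not from the paper); it computes a, b, c and γ from the formulas of Theorem 2.1 for a given (m, λ, ρ) and performs one iteration of (6):

```python
def family_step(f, fp, x, m, lam, rho):
    """One iteration of scheme (6); a, b, c, gamma follow Theorem 2.1."""
    kappa = (m / (m + 2)) ** m
    kl, kr = kappa * lam, kappa * rho
    a = (m / (m + 2)) * (m - m * (m + 2) ** 3 * (1 + kr) / 8
        - (m + 2) ** 2 * (m * (-1 + 2 * kl) - (m + 2) * kr)
          * (m + (m + 2) * kr) ** 2 / (16 * m * kl))
    b = -(m + 2) * (m + (m + 2) * kr) ** 3 / (16 * kl)
    c = m * (m + 2) ** 3 * kappa * (1 + kr) / 8
    gamma = 2 * m / (m + 2)
    fx, fpx = f(x), fp(x)
    y = x - gamma * fx / fpx
    fpy = fp(y)
    # F(y) replaces f'(x) in the first-order Taylor model by the
    # weighted harmonic mean of f'(x) and f'(y):
    F = fx + (y - x) * lam * fpx * fpy / (fpx + rho * fpy)
    return y - a * fx / fpx - b * F / fpx - c * fx / fpy


if __name__ == "__main__":
    # double root alpha = sqrt(2) of f(x) = (x^2 - 2)^2, with lambda = rho = 1
    f = lambda x: (x * x - 2) ** 2
    fp = lambda x: 4 * x * (x * x - 2)
    x = 1.5
    for _ in range(4):
        x = family_step(f, fp, x, 2, 1.0, 1.0)
    print(x)  # close to 1.4142135623730951
```

Here λ = ρ = 1 is an arbitrary admissible choice of the two free parameters; any λ, ρ keeping the denominators nonzero defines a member of the family.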
Theorem 2.1 shows that proposed method (6) possesses two free parameters λ and ρ. This freedom
gives us the advantage that iterative scheme (6) can generate a variety of numerical methods, and one
can often select the parameters λ and ρ best suited for a sought zero α. Several interesting choices of λ
and ρ further motivate our current analysis. As seen in Table 1, we consider five methods Y1, Y2, Y3,
Y4 and Y5 and list the selected parameters (λ, ρ) and the corresponding values (a, b, c), respectively.
If λ = −((m+2)/m)^m (m+2)^2/(m(m^2+2m+4)) and ρ = −((m+2)/m)^m are selected, then we obtain a = 0,
b = −m(m^2+2m+4)/(2(m+2)) and c = 0, in which case iterative scheme (6) becomes method Y5 mentioned above
and reduces to iterative scheme (1) developed by Li et al.[8].
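Because κ, λ and ρ above are rational numbers for every integer m, this specialization can be checked in exact rational arithmetic. The sketch below is our own check (the helper name theorem_abc is ours); it evaluates the parameter formulas of Theorem 2.1 and confirms a = 0, c = 0 and b = −m(m^2 + 2m + 4)/(2(m + 2)):

```python
from fractions import Fraction

def theorem_abc(m, lam, rho):
    """a, b, c of Theorem 2.1 with kappa = (m/(m+2))^m, as exact rationals."""
    kappa = Fraction(m, m + 2) ** m
    kl, kr = kappa * lam, kappa * rho
    a = Fraction(m, m + 2) * (
        m
        - Fraction(m * (m + 2) ** 3, 8) * (1 + kr)
        - Fraction((m + 2) ** 2, 16 * m)
          * (m * (-1 + 2 * kl) - (m + 2) * kr)
          * (m + (m + 2) * kr) ** 2 / kl
    )
    b = -Fraction(m + 2, 16) * (m + (m + 2) * kr) ** 3 / kl
    c = Fraction(m * (m + 2) ** 3, 8) * kappa * (1 + kr)
    return a, b, c

# the Y5 choice of (lambda, rho) collapses scheme (6) to the method of Li et al.[8]
for m in range(1, 8):
    rho = -Fraction(m + 2, m) ** m
    lam = rho * Fraction((m + 2) ** 2, m * (m * m + 2 * m + 4))
    a, b, c = theorem_abc(m, lam, rho)
    assert a == 0 and c == 0
    assert b == -Fraction(m * (m * m + 2 * m + 4), 2 * (m + 2))
```

Working over Fraction avoids any rounding, so the identities a = 0 and c = 0 are verified exactly rather than up to floating-point tolerance.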
In this paper, we investigate the optimal convergence of these fourth-order methods for multiple-root
finders with known multiplicity, in the sense of the optimal order claimed by Kung and Traub[7], and
derive the error equation. Our proposed schemes require one evaluation of the function and two
evaluations of the first derivative per iteration and thus satisfy the optimal order. In addition, through
a variety of numerical experiments we confirm that the proposed methods exhibit the convergence
behavior predicted by the developed theory.
2. Convergence Analysis
In this section, we describe the choice of parameters a, b, c in terms of λ and ρ that yields fourth-order
convergence of our proposed scheme (6).
Theorem 2.1 Let f : C → C have a zero α with integer multiplicity m ≥ 1 and be analytic in
a small neighborhood of α. Let κ = (m/(m+2))^m and θ_j = f^{(m+j)}(α)/f^{(m)}(α) for j ∈ N. Let x_0 be an initial
guess chosen in a sufficiently small neighborhood of α. Let

a = (m/(m+2)) {m − (1/8) m(m+2)^3 (1 + κρ) − (m+2)^2 (m(−1 + 2κλ) − (m+2)κρ)(m + (m+2)κρ)^2/(16mκλ)},
b = −(m+2)(m + (m+2)κρ)^3/(16κλ),
c = (1/8) m(m+2)^3 κ(1 + κρ)

and γ = 2m/(m+2). Let λ, ρ ∈ R be two free constant parameters. Then iterative method (6) is of order four
and defines a two-parameter family of iterative methods with the following error equation:

e_{n+1} = ψ_4 e_n^4 + O(e_n^5),

where e_n = x_n − α and

ψ_4 = ((8 + 2m + 6m^2 + 4m^3 + m^4 − 24κρ/(m + (m+2)κρ))/(3m^4 (m+1)^3 (m+2))) θ_1^3 − (1/(m(m+1)^2 (m+2))) θ_1 θ_2 + (m/((m+1)(m+3)(m+2)^3)) θ_3.
Proof. Using Taylor's series expansion about α, we have the following relations:

f(x_n) = (f^{(m)}(α)/m!) e_n^m [1 + A_1 e_n + A_2 e_n^2 + A_3 e_n^3 + A_4 e_n^4 + O(e_n^5)],   (7)

f'(x_n) = (f^{(m)}(α)/(m−1)!) e_n^{m−1} [1 + B_1 e_n + B_2 e_n^2 + B_3 e_n^3 + B_4 e_n^4 + O(e_n^5)],   (8)

where A_k = (m!/(m+k)!) θ_k, B_k = ((m−1)!/(m+k−1)!) θ_k and θ_k = f^{(m+k)}(α)/f^{(m)}(α) for k ∈ N.
Dividing (7) by (8), we obtain

f(x_n)/f'(x_n) = (1/m) [e_n − K_1 e_n^2 − K_2 e_n^3 − K_3 e_n^4 + O(e_n^5)],   (9)

y_n = x_n − γ f(x_n)/f'(x_n) = α + t e_n + K_1 (1−t) e_n^2 + K_2 (1−t) e_n^3 + K_3 (1−t) e_n^4 + O(e_n^5),   (10)

with t = 1 − γ/m.
Since f'(y_n) can be expressed from (8) with e_n replaced by (y_n − α) from (10), we get

f'(y_n) = (f^{(m)}(α)/(m−1)!) (y_n − α)^{m−1} [1 + B_1 (y_n − α) + B_2 (y_n − α)^2 + B_3 (y_n − α)^3 + B_4 (y_n − α)^4 + O(e_n^5)].   (11)

With the aid of the symbolic computation of Mathematica[20], we substitute (7)-(11) into proposed method
(6) to obtain the error equation as

e_{n+1} = y_n − α − a f(x_n)/f'(x_n) − b F(y_n)/f'(x_n) − c f(x_n)/f'(y_n) = ψ_1 e_n + ψ_2 e_n^2 + ψ_3 e_n^3 + ψ_4 e_n^4 + O(e_n^5),   (12)
where ψ_1 = t − (a + b + c t^{1−m})/m − b(t − 1) t^{m−1} λ/(1 + t^{m−1} ρ), and the coefficients ψ_i (i = 2, 3, 4)
depend on the parameters t, a, b, c, λ and ρ.
Solving ψ_1 = 0 and ψ_2 = 0 for a and b, respectively, we get after simplifications:

a = −b + t(m − c t^{−m}) − b m (t − 1) t^{m−1} λ/(1 + t^{m−1} ρ),   (13)

c = t^{m−1} m (P_1 + t^m P_2 ρ)/(2(t − 1)^2 (1 + m(t − 1) + t)^3).   (17)
Substituting t = m/(m+2) into (13), (14) and (17) with κ = (m/(m+2))^m, we can rearrange these expressions
to obtain

a = (m/(m+2)) {m − m(m+2)^3 (1 + κρ)/8 − (m+2)^2 (m(−1 + 2κλ) − (m+2)κρ)(m + (m+2)κρ)^2/(16mκλ)},   (18)

b = −(m+2)(m + (m+2)κρ)^3/(16κλ),   (19)

and

c = (1/8) m(m+2)^3 κ(1 + κρ).   (20)
Calculating with the aid of the symbolic computation of Mathematica[20], we arrive at the error equation
below:

e_{n+1} = ψ_4 e_n^4 + O(e_n^5),   (21)

where ψ_4 = ((8 + 2m + 6m^2 + 4m^3 + m^4 − 24κρ/(m + (m+2)κρ))/(3m^4 (m+1)^3 (m+2))) θ_1^3 − (1/(m(m+1)^2 (m+2))) θ_1 θ_2 + (m/((m+1)(m+3)(m+2)^3)) θ_3,
with κ = (m/(m+2))^m.
It is interesting to observe that error equation (21) contains only one free parameter ρ, being independent
of λ. Table 1 shows typically chosen parameters λ and ρ and defines the various methods Y_k (k = 1, 2, · · · , 5)
derived from (6). Method Y5 results in the iterative scheme (1) that Li et al.[8] suggested.
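The fourth-order convergence asserted by Theorem 2.1 can also be observed numerically in high-precision arithmetic. In the sketch below (our own experiment, not from the paper) we fix (m, λ, ρ) = (2, 1, 1), for which the formulas of Theorem 2.1 reduce to a = 9, b = −27, c = 5 and γ = 1, and estimate the computational order of convergence on f(x) = (x^2 − 2)^2 with double root √2:

```python
from decimal import Decimal, getcontext

getcontext().prec = 200          # enough digits to observe e_n ~ e_{n-1}^4 twice

def f(x):
    return (x * x - 2) ** 2      # double root at sqrt(2)

def fp(x):
    return 4 * x * (x * x - 2)

def step(x):
    # scheme (6) for (m, lambda, rho) = (2, 1, 1): a = 9, b = -27, c = 5, gamma = 1
    fx, fpx = f(x), fp(x)
    y = x - fx / fpx
    fpy = fp(y)
    F = fx + (y - x) * fpx * fpy / (fpx + fpy)
    return y - 9 * fx / fpx + 27 * F / fpx - 5 * fx / fpy

alpha = Decimal(2).sqrt()
xs = [Decimal("1.5")]
for _ in range(3):
    xs.append(step(xs[-1]))
e = [abs(x - alpha) for x in xs]
# computational order ln(e3/e2)/ln(e2/e1), which should come out close to 4
p = float((e[3].ln() - e[2].ln()) / (e[2].ln() - e[1].ln()))
print(p)
```

The number of correct digits roughly quadruples at each step, and the estimated order is close to 4, in agreement with (21).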
3. Numerical Experiments

All computations were performed with a large number of precision digits, and the inexact value of α is
approximated accurately enough, up to about 400 significant digits, with the command
FindRoot[f[x], {x, x0}, PrecisionGoal → 400, WorkingPrecision → 600]; we list the computed values up to
15 significant digits because of the limited space.
Table 2: Convergence behavior with f(x) = cos(πx^2/6) log(x^2 − √3 x + 1), (m, λ, ρ) = (2, 5, 5), α = √3

n   x_n                |f(x_n)|           |x_n − α|          |e_{n+1}/e_n^4|   η
0   1.58               0.0716114          0.152051                             0.02152876768
1   1.73207355929649   1.62622 × 10^−9    0.0000227517       0.04256566818
2   1.73205080756888   1.04639 × 10^−40   5.77127 × 10^−21   0.02153842658
3   1.73205080756888   1.79209 × 10^−165  2.38839 × 10^−83   0.02152876768
4   1.73205080756888   0.0 × 10^−599      0.0 × 10^−299
As a first example with a double zero α = √3 and an initial guess x_0 = 1.58, we select the test
function f(x) = cos(πx^2/6) log(x^2 − √3 x + 1). As a second experiment, we take another test function
f(x) = (16 + x^2)^3 (log(x^2 + 17))^4 with a root α = −4i of multiplicity m = 7 and with an initial value
x_0 = −3.94i.

Taking another test function f(x) = (1 − sin(x^2)) (log(2x^2 − π + 1))^4 with a root α = −√(π/2) of
multiplicity m = 6, we choose an initial value x_0 = −1.18.
n   x_n                 |f(x_n)|            |x_n − α|          |e_{n+1}/e_n^4|   η
0   −1.18               0.000601837         0.0733141                            0.3470127318
1   −1.25334379618481   1.35039 × 10^−24    0.0000296589       1.026605711
2   −1.25331413731550   7.41245 × 10^−109   2.68363 × 10^−19   0.3468196331
3   −1.25331413731550   6.74576 × 10^−446   1.79984 × 10^−75   0.3470127318
4   −1.25331413731550   4.62631 × 10^−1794  3.64139 × 10^−300
The convergence behavior and the asymptotic error constant are clearly shown in Tables 2-4, reaching a
good agreement with the theory developed in Section 2.
The additional test functions f_1, f_2, · · · , f_7 listed below further confirm the convergence behavior of
our proposed method (6).
f_1(x) = (x^7 − x^2 − 7)/(x^5 + sin x),  α = 1.3657, m = 1, x_0 = 1.32
f_2(x) = (e^{x^5 − x^2 − 7} − 1)(7 + x^2 − x^5),  α = −1.16 − 0.95i, m = 2, x_0 = −1.14 − 0.92i
f_3(x) = (x^6 − 8)^2 log(x^6 − 7),  α = √2, m = 3, x_0 = 1.39
f_4(x) = (3 − x + x^2)^4 cot^2(x + 1),  α = (1 − √11 i)/2, m = 4, x_0 = 0.47 − 1.79i
f_5(x) = (x^4 − 9x^3 + 4x^2 − 33x − 27)(log(x − 8))^3,  α = 9, m = 5, x_0 = 8.79
f_6(x) = (log(1 − π + x))^6,  α = π, m = 6, x_0 = 3.09
f_7(x) = (e^{3x + x^2} − 1) cos^3(πx^2/18) (log(x^3 − x^2 + 37))^3,  α = −3, m = 7, x_0 = −2.88
Table 5 shows the convergence behavior of |x_n − α| for the methods L, J, Y1, Y2, Y3 and Y4, where
L denotes the method proposed by Li et al.[8], J the method proposed by Sharma and Sharma[15], and the
methods Y1 to Y4 are described in Table 1. The proposed method (6) needs one evaluation of the
function f and two evaluations of the first derivative f' per iteration. Consequently, the corresponding
efficiency index[19] is 4^{1/3} ≈ 1.587, which is optimally consistent with the conjecture of
Kung and Traub[7]. For the test functions chosen in these numerical experiments, methods
Y1 to Y4 have shown better accuracy than methods L and J.
Nevertheless, the favorable performance of proposed scheme (6) cannot always be expected, since no
iterative method shows the best accuracy for all test functions. If we look closely at the asymptotic error
constant η = η(f, α, p) = lim_{n→∞} |x_{n+1} − α|/|x_n − α|^p, we should note that the computational accuracy
depends sensitively on the structure of the iterative method, the sought zero, the convergence order
and the test function, as well as on good initial values.
It is important to choose a sufficiently large number of precision digits. If e_k is small, e_k^p gets much
smaller as k increases. If the number of precision digits is small and the error bound ε is not small enough,
the term e_k^p causes a great loss of significant digits due to magnified round-off errors; this hinders us from
verifying p and η more accurately.
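This loss of significance is easy to reproduce with Python's decimal module (our own toy example, not from the paper): an iterate that agrees with α to 30 digits carries no recoverable error information once only 15 working digits are kept.

```python
from decimal import Decimal, getcontext

getcontext().prec = 50
alpha = Decimal(2).sqrt()               # reference root to 50 significant digits
x_n = alpha + Decimal("1e-30")          # iterate agreeing with alpha to 30 digits

getcontext().prec = 15                  # too few working digits
err_low = +x_n - +alpha                 # unary + rounds each value to 15 digits
getcontext().prec = 50
err_high = x_n - alpha

print(err_low)   # the computed error collapses to zero: p and eta are lost
print(err_high)  # with enough digits the true error 1e-30 is recovered
```

In practice, the working precision must comfortably exceed the number of digits to which x_n and α agree at the last iteration that is used to estimate p and η.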
Bold-face numbers in Table 5 refer to the least error before the prescribed error bound is met. This
paper has confirmed the optimal fourth-order convergence and proved the correct error equation for the
proposed iterative methods (6), which use the weighted harmonic mean of two derivatives to find
approximate multiple zeros of nonlinear equations. We remark that the error equation of (6) contains
only one free parameter ρ, being independent of λ.
We still further need to discuss some aspects of root-finding for ill-conditioned problems, as well as the
sensitive dependence of zeros on initial values for iterative methods. As is well known, a polynomial of
high degree (say, higher than 20, taking the multiplicity of a zero into account) is very likely to be
ill-conditioned; small changes in the coefficients can then greatly alter the zeros. Such small changes
can occur as a result of the rounding process in computing the coefficients. Minimizing round-off errors
may improve the root-finding for ill-conditioned problems. Certainly multi-precision arithmetic should
be used in conjunction with optimized algorithms reducing round-off errors. High-order methods with
an asymptotic error constant of small magnitude are preferred for locating zeros with relatively good
accuracy. Locating zeros for ill-conditioned problems is generally believed to be a difficult task.

Table 5: Errors |x_n − α| for the methods L, J and Y1-Y4

f(x)  x_0             error        L            J            Y1           Y2           Y3           Y4
f_1   1.32            |x_1 − α|    5.37e−7†     4.00e−7      6.26e−8      5.37e−7      1.96e−7      2.14e−6
                      |x_2 − α|    1.20e−26     1.27e−27     7.31e−31     1.2e−26      9.24e−30     9.29e−24
                      |x_3 − α|    3.02e−105    1.27e−109    1.36e−122    3.02e−104    4.53e−119    3.24e−93
                      |x_4 − α|    0.e−299      0.e−299      0.e−299      0.e−299      0.e−299      0.e−299
f_2   −1.14 − 0.92i   |x_1 − α|    0.001168     0.00144      0.00124      0.00097      0.00136      0.0018
                      |x_2 − α|    4.99e−10     1.60e−9      7.38e−10     1.45e−10     1.19e−9      5.18e−9
                      |x_3 − α|    1.65e−35     2.46e−33     8.97e−35     7.24e−38     7.15e−34     3.46e−31
                      |x_4 − α|    2.02e−137    1.37e−128    1.95e−134    4.44e−147    9.06e−131    6.89e−120
                      |x_5 − α|    0.e−299      0.e−299      0.e−299      0.e−299      0.e−299      0.e−299
f_3   1.39            |x_1 − α|    0.000428741  0.0002722    0.000399     0.00492273   0.0003078    0.000191074
                      |x_2 − α|    1.77e−12     3.38e−13     1.37e−12     5.14e−8      5.32e−13     9.02e−14
                      |x_3 − α|    5.19e−46     8.05e−49     1.89e−46     1.19e−28     4.73e−48     4.48e−51
                      |x_4 − α|    3.79e−180    2.57e−191    6.92e−182    3.46e−111    2.96e−188    2.73e−200
                      |x_5 − α|    0.e−299      0.e−299      0.e−299      0.e−299      0.e−299      0.e−299
f_4   0.47 − 1.79i    |x_1 − α|    0.00016      0.000159     0.00016      0.000153     0.0001602    0.000159
                      |x_2 − α|    3.55e−16     3.39e−16     3.55e−16     2.81e−16     3.43e−16     3.32e−16
                      |x_3 − α|    8.36e−63     6.94e−63     8.42e−63     3.14e−63     7.28e−63     6.33e−63
                      |x_4 − α|    2.56e−249    1.21e−249    2.63e−249    4.89e−249    1.46e−249    8.37e−250
                      |x_5 − α|    0.e−299      0.e−299      0.e−299      0.e−299      0.e−299      0.e−299
f_5   8.79            |x_1 − α|    0.0000222    0.0000184    0.000023     0.0000118    0.0000193    0.0000164
                      |x_2 − α|    1.43e−21     5.53e−22     1.90e−21     5.51e−23     7.12e−22     3.06e−22
                      |x_3 − α|    2.47e−86     4.46e−88     8.07e−86     2.60e−92     1.29e−87     3.71e−89
                      |x_4 − α|    0.e−299      0.e−299      0.e−299      0.e−299      0.e−299      0.e−299
f_6   3.09            |x_1 − α|    2.84e−7      3.45e−7      2.36e−7      4.14e−7      3.30e−7      3.65e−7
                      |x_2 − α|    2.51e−28     6.55e−28     1.00e−28     1.62e−27     5.28e−28     8.68e−28
                      |x_3 − α|    1.52e−112    8.50e−111    3.24e−114    3.86e−109    3.44e−111    2.76e−110
                      |x_4 − α|    0.e−299      0.e−299      0.e−299      0.e−299      0.e−299      0.e−299
f_7   −2.88           |x_1 − α|    0.00109      0.001241     0.00088      0.00137      0.0012       0.00128
                      |x_2 − α|    2.04e−11     1.97e−11     3.14e−11     1.00e−10     6.32e−12     4.04e−11
                      |x_3 − α|    1.77e−42     1.67e−42     4.48e−41     2.90e−39     9.48e−45     4.52e−41
                      |x_4 − α|    1.01e−166    8.05e−167    1.84e−160    2.05e−153    4.77e−176    7.12e−161
                      |x_5 − α|    0.e−299      0.e−299      0.e−299      0.e−299      0.e−299      0.e−299
It is also important to choose initial values close to the root to guarantee the convergence of the proposed
method. Indeed, the convergence to the root α depends chaotically[4] on the initial values. The
following statement quoted from [18] emphasizes the importance of selected initial
values: "A point that belongs to the non-convergent region for a particular value of the parameter can
be in the convergent region for another parameter value, even though the former might have a higher
order of convergence than the second. This then indicates that showing whether a method is better
than the other should not be done through solving a function from a randomly chosen initial point
and comparing the number of iterations needed to converge to a root."
Since our current analysis aims at the convergence of the proposed method, initial values are
selected in a small neighborhood of α to guarantee convergence. Thus the chaotic dependence of the
convergence on x_0 should be treated separately as a different subject in future analysis. On the
one hand, future research may be strengthened by a graphical analysis of the convergence,
including chaotic fractal basins of attraction. On the other hand, rational approximations[1] provide
rich resources for future research on developing new high-order optimal methods for multiple zeros.
References
[1] A. Iliev, N. Kyurkchiev, Nontrivial Methods in Numerical Analysis (Selected Topics in Numerical
Analysis), Lambert Academic Publishing, Saarbrücken, 2010.
[2] L. Atanassova, N. Kjurkchiev, A. Andreev, Two-sided multipoint methods of high order for solution
of nonlinear equations, Numerical Methods and Applications, Proc. of the international conference
on numerical methods and applications, Sofia, August 22-27, 1989, 33-37.
[3] C. Dong, A family of multipoint iterative functions for finding multiple roots of equations, Int. J.
Comput. Math., 21, 1987, 363-367.
[4] D. Gulick, Encounters with Chaos, Prentice-Hall, Inc., 1992.
[5] P. Jarratt, Some efficient fourth order multipoint methods for solving equations, BIT, 9, 1969,
119-124.
[6] A. N. Johnson, B. Neta, High-order nonlinear solver for multiple roots, Comp. Math. Appl.,
55, 2008, 2012-2017.
[7] H.T. Kung, J.F. Traub, Optimal order of one-point and multipoint iteration, J. Assoc. Comput.
Mach., 21, 1974, 643-651.
[8] S. Li, X. Liao, L. Cheng, A new fourth-order iterative method for finding multiple roots of nonlinear
equations, Appl. Math. Comput., 215, 2009, 1288-1292.
[9] S. Li, L. Cheng, B. Neta, Some fourth-order nonlinear solvers with closed formulae for multiple
roots, Comp. Math. Appl., 59 (1), 2010, 126-135.
[10] X. Li, C. Mu, J. Ma, L. Hou, Fifth-order iterative methods for finding multiple roots of nonlinear
equations, Numer. Algor., 57 (3), 2011, 389-398.
[11] T. Murakami, Some fifth order multipoint iterative formulae for solving equations, J. Inform.
Process., 1, 1978, 138-139.
[12] B. Neta, Numerical Methods for the Solution of Equations, Net-A-Sof, California, 1983.
[13] B. Neta, Extension of Murakami's high order nonlinear solver to multiple roots, Int. J. Comput.
Math., 87, 2010, 1023-1031.
[14] M. Petkovic, L. Petkovic, J. Dzunic, Accelerating generators of iterative methods for finding
multiple roots of nonlinear equations, Comp. Math. Appl., 59 (8), 2010, 2784-2793.
[15] J. Sharma, R. Sharma, Modified Jarratt method for computing multiple roots, Appl. Math.
Comput., 217, 2010, 878-881.
[16] R. Sharma, Some more inequalities for arithmetic mean, harmonic mean and variance, J. Math.
Inequal. 2 (1), 2008, 109-114.
[17] J. Sharma, R. Sharma, Modified Chebyshev-Halley type method and its variants for computing
multiple roots, Numer. Algor., 61(4), 2012, 567-578.
[18] H. Susanto, N. Karjanto, Newton's method's basins of attraction revisited, Appl. Math. Comput.,
215, 2009, 1084-1090.
[19] J. F. Traub, Iterative Methods for the Solution of Equations, Prentice-Hall, New Jersey, 1964.
[20] Stephen Wolfram, The Mathematica Book, 5th ed., Wolfram Media, 2003.
[21] X. Y. Wu, D. S. Fu, New higher-order convergence iteration methods without employing derivatives
for solving nonlinear equations, Comp. Math. Appl. 41, 2001, 489-495.
[22] B. I. Yun, Iterative methods for solving nonlinear equations with finitely many roots in an interval,
J. Comp. Appl. Math., 236 (13), 2012, 3308-3318.
[23] B. I. Yun, A derivative free iterative method for finding multiple roots of nonlinear equations,
Appl. Math. Lett., 22, 2009, 1859-1863.
[24] X. Zhou, X. Chen, Y. Song, Constructing higher-order methods for obtaining the multiple roots
of nonlinear equations, J. Comp. Appl. Math., 235 (14), 2011, 4199-4206.