Journal of Computational and Applied Mathematics: H.Z. Hassan, A.A. Mohamad, G.E. Atteia
1. Introduction
Numerical differentiation is an elementary and essential part of scientific modelling and numerical analysis. It is
extensively used whenever we need to calculate the rate of change of sampled or discrete data whose generating
function is unknown, as is the case when solving ordinary or partial differential equations numerically.
Many methods have been introduced and discussed to determine the derivatives numerically. These methods can be
classified into two approaches [1]. The first approach aims to develop formulas for calculating the derivatives numerically.
This includes the Taylor expansion based finite difference method [2–6], the operator method [2,7], the interpolating
polynomial method (e.g., Lagrangian, Newton–Gregory, Gauss, Bessel, Stirling, Hermite interpolating polynomials, etc.)
[3–5,7], and the lozenge diagram method [8]. The operator, polynomial interpolation, and lozenge diagram
methods are implicit and are generally based on difference tables constructed from the data sample. Furthermore, they
are complicated and computationally expensive methods because of the large amount of memory required to retain
∗ Corresponding author.
E-mail addresses: [email protected], [email protected] (H.Z. Hassan), [email protected] (A.A. Mohamad), [email protected] (G.E. Atteia).
doi:10.1016/j.cam.2011.12.019. © 2011 Elsevier B.V. All rights reserved.
their difference tables. Moreover, these numerical differentiation formulas are in fact equivalent to Taylor series
based approximations [9]. The second approach does not give an explicit formula for the derivative; it aims only to evaluate
it from the function data. The automatic differentiation method [10–12], the regularization method [13–15], and the
Richardson extrapolation method [4,16] fall in this category.
The Taylor series based finite difference approximation is used to numerically evaluate the derivative of a function at
a grid reference point by using the data samples at the other neighbouring points within the domain. Depending on the
degree of differentiation and order of accuracy required, a number of linear equations are generated using the undetermined
coefficients method. These equations are to be solved to calculate the weighting coefficients for the differencing formula at
the reference point. The procedure becomes more difficult as the degree of the derivative and accuracy required are increased
due to the increase in the number of linear equations to be solved. Furthermore, the process should be repeated for each
mesh point within the domain. In addition, if the required accuracy is changed for the same derivative, a new system of
equations should be derived and solved for a new set of coefficients. Therefore, the implementation of the Taylor series
based finite difference approximation is limited to lower degrees and orders. However, there exist some documented sets
of pre-calculated finite difference weighting coefficients tabulated for high order and high degree derivatives; see [17]. In
this reference, the formula introduced is restricted by the degree, the order, and the limits of the given tables. Although Bickley [17]
referred in his paper to repeated application of the formula when higher degrees are required, this comes at the expense
of the order of accuracy. Another attempt to tabulate weights for many derivative degrees, approximated to high
orders of accuracy, was the focus of [18], which is more extensive than the work done in [17].
The direct use of the finite difference method is computationally very expensive when higher degree derivatives
with lesser errors are required. However, this method becomes more attractive if a closed explicit algebraic form of the
coefficients is found. Gregory [19] has used the undetermined coefficients method to get the coefficients in his formula
presented for calculating the derivatives. The method involved the solution of systems of linear equations, or equivalently,
the inversion of a certain Vandermonde matrix. This work was extended in [20] to obtain explicit representations
for the same coefficients in terms of Stirling numbers which are extensively tabulated. On the basis of Taylor series,
Khan and Ohba [6,9,21,22] have presented the explicit forward, backward and central difference formulas for finite
difference approximations with arbitrary orders for the first derivative, and the central difference approximations for higher
derivatives. However, these explicit forward, backward and central difference formulas do not cover every situation:
explicit approximation formulas for the derivatives near the ends of an interval still need to be developed [1].
The previously mentioned techniques developed for finding explicit formula expressions for the derivatives of grid
based functions are of considerable complexity. Although these algorithms are correctly proven and tested, in real
calculations they fail to provide accurate values for the weighting coefficients when higher approximation orders are
applied. The reason is the ill-conditioned system of linear equations, with a large Vandermonde matrix, that must be
solved for the coefficients. It is therefore hard to directly compute the matrix inverse, or even to use its determinant, in
finding the coefficients, because the determinant of the Vandermonde matrix grows without bound as the matrix
size increases. One other significant remark regarding the previously published coefficient tables or closed formulas is
related to the accuracy of the approximation. The accuracies of the forward and backward difference approximations are
obviously less than those of the other points. This is because forward and backward formulas use data only on one side of a
reference point, while the formulas for the other points use data on both sides of their reference points. In the special case when
an even derivative degree with an odd order of approximation accuracy is required, the point at the middle of the grid will
be of one higher order of accuracy than the other points within the stencil. This issue may concern researchers who aim to get
a unified approximation order across the whole solution domain.
The present paper aims to introduce a dependable explicit formula that can be used to calculate the numerical
approximations of arbitrary degree and order of derivative. The algorithm does not use the Vandermonde matrix or its
determinant in finding the weighting coefficients. Furthermore, the explicit formula imposes a smaller calculation burden,
and needs less computing time and storage for estimating the derivatives, than the other methods stated above. Moreover,
an easy and user-friendly computer code is provided to help other researchers use the method directly in their calculations.
The code is recommended for use by any researcher who is concerned with derivative approximation, especially in solving
ODEs and PDEs. That is because it is just a programming function that can be called with input of the sampling data, the
degree, and the order, and returns the corresponding derivatives at all the grid points to the main program. Finally, the
proposed method gives a unified order of accuracy for the derivative at all the grid nodes.
2. Notation
Suppose that x1, x2, . . . , xn are n different real numbers and x1 < x2 < · · · < xn. Let f be a continuously differentiable
function on the interval [x1, xn], with its values given at these numbers. Then, for any xi the function value will be
denoted as fi. Moreover, for an equally spaced tabulation of the function f, the sampling period will be denoted by h, where
h = xi+1 − xi and xi = xl + (i − l)h (with h > 0; throughout, i and l denote integers). Fig. 1 shows the
numerical stencil which will be used in the algorithm derivation; the grid point i is considered the reference or base node.
The derivative of the function f defined at the reference node i and at an arbitrary derivation degree m and an arbitrary order
of accuracy O will be denoted by fi (m, O), where the required differentiation degree and order of accuracy are constrained
by the relation n ≥ m + O.
By using the undetermined coefficients method, the mth-degree derivative at the base point i can be approximated up
to the order of accuracy O by using a stencil with n = m + O points as follows:
f_i(m, O) = \sum_{l=1}^{n} C_{i,l}(m, O) \, f_l, \qquad i = 1, 2, 3, \ldots, n \qquad (1)
where the n unknown factors, Ci,l (m, O), are the weighting coefficients for the derivative. The values of the function f at all
other nodes l ̸= i can be expressed as Taylor series in terms of the reference point i as given in Eq. (2):
f_l = \sum_{k=0}^{\infty} \frac{(l-i)^k}{k!} \, h^k f_i^{(k)}. \qquad (2)

Substituting these expansions into Eq. (1) and collecting the terms that multiply each derivative of f at the reference node gives

f_i(m, O) = f_i \sum_{l=1}^{n} C_{i,l}(m, O)
    + \frac{h}{1!} f_i^{(1)} \sum_{l=1}^{n} C_{i,l}(m, O)\,(l-i)
    + \frac{h^2}{2!} f_i^{(2)} \sum_{l=1}^{n} C_{i,l}(m, O)\,(l-i)^2
    + \cdots
    + \frac{h^{n-1}}{(n-1)!} f_i^{(n-1)} \sum_{l=1}^{n} C_{i,l}(m, O)\,(l-i)^{n-1}
    + E_i(m, O). \qquad (4)
The term Ei (m, O) in the previous equation represents the remainder or error term in the approximation of fi (m, O). Eq. (4)
can be rewritten in a more compact form as follows:
f_i(m, O) = \sum_{g=1}^{n} \frac{h^{g-1}}{(g-1)!} \, f_i^{(g-1)} \sum_{l=1}^{n} C_{i,l}(m, O)\,(l-i)^{g-1} + E_i(m, O). \qquad (5)
Eq. (5) represents the exact value of the mth derivative of the function at i. An approximated value up to the order of accuracy
of O can be obtained by neglecting the remainder term Ei (m, O). Therefore, we can write
f_i(m, O) \approx \sum_{g=1}^{n} \frac{h^{g-1}}{(g-1)!} \, f_i^{(g-1)} \sum_{l=1}^{n} C_{i,l}(m, O)\,(l-i)^{g-1}. \qquad (6)
We equate the coefficient of any degree derivative from the right hand side to the corresponding coefficient of the same
degree derivative from the left hand side. Hence, a system of n linear equations in the unknown weighting coefficients can
be represented by the following equation:
\sum_{l=1}^{n} C_{i,l}(m, O)\,(l-i)^{g-1} = \frac{(g-1)!}{h^{g-1}} \, b_g, \qquad g = 1, 2, 3, \ldots, n \qquad (7)
H.Z. Hassan et al. / Journal of Computational and Applied Mathematics 236 (2012) 2622–2631 2625
where
b_g = \begin{cases} 0 & \text{if } g \neq m+1 \\ 1 & \text{if } g = m+1. \end{cases}

Therefore,

\sum_{l=1}^{n} C_{i,l}(m, O)\,(l-i)^{g-1} = \begin{cases} 0 & g = 1, 2, 3, \ldots, n \text{ and } g \neq m+1 \\ \dfrac{m!}{h^m} & g = m+1. \end{cases} \qquad (8)
This equation can be represented in matrix form as follows:
\underbrace{\begin{bmatrix}
(1-i)^0 & (2-i)^0 & \cdots & (l-i)^0 & \cdots & (n-i)^0 \\
(1-i)^1 & (2-i)^1 & \cdots & (l-i)^1 & \cdots & (n-i)^1 \\
(1-i)^2 & (2-i)^2 & \cdots & (l-i)^2 & \cdots & (n-i)^2 \\
\vdots & \vdots & & \vdots & & \vdots \\
(1-i)^{g-1} & (2-i)^{g-1} & \cdots & (l-i)^{g-1} & \cdots & (n-i)^{g-1} \\
\vdots & \vdots & & \vdots & & \vdots
\end{bmatrix}}_{A_i(m,O)}
\underbrace{\begin{bmatrix}
C_{i,1}(m,O) \\ C_{i,2}(m,O) \\ C_{i,3}(m,O) \\ \vdots \\ C_{i,g}(m,O) \\ \vdots
\end{bmatrix}}_{C_i(m,O)}
= \frac{m!}{h^m}
\underbrace{\begin{bmatrix}
b_1 \\ b_2 \\ b_3 \\ \vdots \\ b_g \\ \vdots
\end{bmatrix}}_{b_i(m,O)} \qquad (9)
Therefore,
A_i(m, O)\, C_i(m, O) = \frac{m!}{h^m}\, b_i(m, O).
Solving for Ci (m, O), then
C_i(m, O) = \frac{m!}{h^m}\, A_i^{-1}(m, O)\, b_i(m, O). \qquad (10)
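Before passing to the closed form, Eqs. (9)–(10) can be sanity-checked numerically. The following Python sketch (an illustration only, not the authors' Matlab code; the helper name `fd_weights_linear_system` is ours) assembles the transposed-Vandermonde system and solves it in exact rational arithmetic, which sidesteps the ill-conditioning discussed above for moderate n:

```python
from fractions import Fraction
from math import factorial

def fd_weights_linear_system(m, O, i, h=Fraction(1)):
    """Solve A_i C_i = (m!/h^m) b_i, i.e. Eqs. (9)-(10), by Gaussian
    elimination over the rationals (exact, so no conditioning issues)."""
    n = m + O
    # Row g of A_i holds (l - i)^(g-1) for l = 1..n (transposed Vandermonde).
    A = [[Fraction(l - i) ** (g - 1) for l in range(1, n + 1)]
         for g in range(1, n + 1)]
    rhs = [Fraction(0)] * n
    rhs[m] = Fraction(factorial(m)) / h ** m    # b_g = 1 only at g = m + 1

    # Forward elimination; pivot swap is only needed to avoid a zero pivot.
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            if f:
                for c in range(col, n):
                    A[r][c] -= f * A[col][c]
                rhs[r] -= f * rhs[col]

    # Back substitution.
    C = [Fraction(0)] * n
    for r in range(n - 1, -1, -1):
        s = sum(A[r][c] * C[c] for c in range(r + 1, n))
        C[r] = (rhs[r] - s) / A[r][r]
    return C

print(fd_weights_linear_system(1, 2, 2))  # central weights -1/2, 0, 1/2
```

For m = 1, O = 2 at the middle node i = 2 this recovers the familiar central-difference weights −1/2, 0, 1/2, which the closed form of Eq. (18) must reproduce.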
It is noted from Eq. (9) that the matrix Ai (m, O) is the transpose of the Vandermonde matrix Vi (m, O). Therefore, Eq. (10)
can be written in the following form:
C_i(m, O) = \frac{m!}{h^m} \left(V_i^{T}(m, O)\right)^{-1} b_i(m, O) = \frac{m!}{h^m} \left(V_i^{-1}(m, O)\right)^{T} b_i(m, O). \qquad (11)
In order to get an explicit formulation for the coefficients C_i(m, O), the Vandermonde matrix inverse V_i^{-1}(m, O) should be
expressed in closed form. El-Mikkawy [23] has introduced an explicit closed form expression for the inverse of the
generalized Vandermonde matrix by using the elementary symmetric functions. After correcting this expression for some
minor mistakes, the Vandermonde matrix inverse is given by

V_i^{-1}(m, O) = m_i(m, O) = \left[ m_{l,r} \right], \qquad m_{l,r} = (-1)^{n-l} \, \frac{\sigma^{(n)}_{n-l+1,\,r}}{\prod_{p=1,\, p \neq r}^{n} (\upsilon_r - \upsilon_p)} \qquad (12)
where the elementary symmetric functions \sigma^{(n)}_{i,j} are defined as

\sigma^{(n)}_{i,j} = \begin{cases}
\displaystyle \sum_{\substack{r_1=1 \\ r_1 \neq j}}^{n} \; \sum_{\substack{r_2=r_1+1 \\ r_2 \neq j}}^{n} \; \sum_{\substack{r_3=r_2+1 \\ r_3 \neq j}}^{n} \cdots \sum_{\substack{r_{i-1}=r_{i-2}+1 \\ r_{i-1} \neq j}}^{n} \; \prod_{h=1}^{i-1} \upsilon_{r_h} & i \neq 1 \\
1 & i = 1.
\end{cases} \qquad (13)

The n distinct parameters \upsilon_1, \upsilon_2, \upsilon_3, \ldots, \upsilon_n in Eq. (13) are defined at the reference node i by

\upsilon_k = k - i, \qquad k = 1, 2, 3, \ldots, n.
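The defining sum in Eq. (13) is simply the elementary symmetric function of degree i − 1 in the n − 1 parameters υ_r with r ≠ j, so it can be evaluated by brute force over all index combinations. A minimal Python sketch (ours, not part of the paper's Listing 1; the helper name `sigma` is an assumption) illustrates this for a 3-point stencil at reference node i = 2:

```python
from itertools import combinations
from math import prod

def sigma(n, i, j, ups):
    """sigma^(n)_{i,j} of Eq. (13): the sum over all products of i-1
    distinct parameters ups[r] with r != j (1-based indices, as in the
    paper); equal to 1 when i == 1."""
    if i == 1:
        return 1
    pool = [ups[r - 1] for r in range(1, n + 1) if r != j]
    return sum(prod(c) for c in combinations(pool, i - 1))

# Parameters upsilon_k = k - i at reference node i = 2 of a 3-point stencil.
ups = [k - 2 for k in (1, 2, 3)]                  # [-1, 0, 1]
print([sigma(3, 2, j, ups) for j in (1, 2, 3)])   # → [1, 0, -1]
```

Note that the first subscript of σ here is the degree index of Eq. (13), not the reference-node index of the stencil.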
The transpose of the Vandermonde inverse matrix is therefore given by

\left(V_i^{-1}(m, O)\right)^{T} = \omega_i(m, O) = \left[ \omega_{l,r} \right], \qquad \omega_{l,r} = (-1)^{n-r} \, \frac{\sigma^{(n)}_{n-r+1,\,l}}{\prod_{p=1,\, p \neq l}^{n} (\upsilon_l - \upsilon_p)}. \qquad (14)
Substituting Eq. (14) into Eq. (11), and noting that b_i(m, O) has a single nonzero entry at position m + 1, gives

C_i(m, O) = \frac{m!}{h^m}\, \omega_i(m, O)\, b_i(m, O) = \frac{m!}{h^m} \left[ \omega_{l,\,m+1} \right]. \qquad (15)
Therefore,
C_{i,l}(m, O) = \frac{m!}{h^m}\, \omega_{l,\,m+1} = \frac{m!}{h^m} \, (-1)^{n-m-1} \, \frac{\sigma^{(n)}_{n-m,\,l}}{\prod_{p=1,\, p \neq l}^{n} (\upsilon_l - \upsilon_p)}. \qquad (16)
Now, the multiple-product term in the denominator of the right hand side of Eq. (16) can be expanded and simplified as
follows:
\prod_{\substack{p=1 \\ p \neq l}}^{n} (\upsilon_l - \upsilon_p) = \prod_{\substack{p=1 \\ p \neq l}}^{n} \left[ (l-i) - (p-i) \right] = \prod_{\substack{p=1 \\ p \neq l}}^{n} (l-p) = (-1)^{n-l} \, (l-1)!\,(n-l)!. \qquad (17)
Substituting into Eq. (16) we get the final explicit closed form for the weighting coefficients in Eq. (1):
C_{i,l}(m, O) = (-1)^{l-m-1} \, \frac{m!}{h^m \, (l-1)!\,(n-l)!} \, \sigma^{(n)}_{n-m,\,l}. \qquad (18)
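Eq. (18) translates into only a few lines of code. The Python sketch below (an illustrative translation, not the paper's Matlab listing; the helper names `sigma` and `weights` are ours) evaluates σ^{(n)}_{n−m,l} by its defining sum and reproduces the classical difference weights:

```python
from itertools import combinations
from math import factorial, prod

def sigma(n, deg, j, ups):
    # sigma^(n)_{deg,j} of Eq. (13): elementary symmetric function of
    # degree deg-1 in the parameters {ups[r] : r != j} (1-based r, j).
    if deg == 1:
        return 1
    pool = [ups[r - 1] for r in range(1, n + 1) if r != j]
    return sum(prod(c) for c in combinations(pool, deg - 1))

def weights(m, O, i, h=1.0):
    """Weighting coefficients C_{i,l}(m, O) from the closed form, Eq. (18)."""
    n = m + O
    ups = [k - i for k in range(1, n + 1)]      # upsilon_k = k - i
    return [(-1) ** (l - m - 1) * factorial(m)
            / (h ** m * factorial(l - 1) * factorial(n - l))
            * sigma(n, n - m, l, ups)
            for l in range(1, n + 1)]

print(weights(1, 2, 2))   # → [-0.5, 0.0, 0.5], the 3-point central stencil
```

With m = 2, O = 1 at the middle node the same function yields 1, −2, 1, the standard second-derivative stencil, as Eq. (8) requires.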
The remainder term Ei (m, O) in Eq. (4) can be expressed as follows:
E_i(m, O) = \sum_{s=n}^{\infty} \frac{h^s}{s!} \, f_i^{(s)} \sum_{l=1}^{n} C_{i,l}(m, O)\,(l-i)^s
          = \sum_{s=n}^{\infty} \frac{m!\, h^{s-m}}{s!} \, f_i^{(s)} \sum_{l=1}^{n} (-1)^{l-m-1} \, \frac{(l-i)^s \, \sigma^{(n)}_{n-m,\,l}}{(l-1)!\,(n-l)!}. \qquad (19)
In this section, we discuss the Matlab computer program developed to numerically calculate the arbitrary
degree derivative of any continuous function f, approximated with any arbitrary order of accuracy. The code is shown as
Listing 1. The Matlab function Sig(v, n), line 94 in the listing, calculates the matrix \sigma^{(n)}_{i,j} as given by Eq. (13). Due to symmetry,
the elements of the jth column of \sigma^{(n)}_{i,j} may be obtained by using

\sigma^{(n)}_{i,j} = \sigma^{(n)}_{i,1} \Big|_{\upsilon_j \to \upsilon_1}. \qquad (20)

The notation \upsilon_j \to \upsilon_1 in Eq. (20) means that, for specific i and j, \sigma^{(n)}_{i,j} may be obtained from the algebraic expression for
\sigma^{(n)}_{i,1} by replacing each \upsilon_j by \upsilon_1 in the expression for \sigma^{(n)}_{i,1}
(see El-Mikkawy [23]). Then, this function is called by another
Matlab function called Coeff(m, O, h), line 80 in the code, which calculates the weighting coefficients matrix for all the
pre-specified stencil n = m + O nodes as given by Eq. (18). Now, the weighting coefficients are available for the third
function f_derivative(m, O, f, h), line 22, for performing the remaining calculations for the required differentiation across
the whole domain with N nodes and the specified degree m, order O and data step size h. First, if the approximation order is
not uniform across the stencil, this function adjusts the numerical stencil for a unified order of accuracy. This situation arises
when m is even and O is odd: the stencil then has an odd number n of nodes, and the middle node is approximated with a
central difference approximation, so it attains an order of accuracy one higher than the other nodes in the stencil. In order
to obtain a unified approximation order over all the stencil nodes, another node is added to the stencil
to eliminate the central difference approximation as shown in the code, lines 31–48. The objective of the remaining code
lines 52–78 is to generate the derivative over the real domain of N nodes by using the pre-calculated weighting coefficients
over the numerical stencil. Finally, the calculated differentiation column vector is passed to the main program.
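The stencil bookkeeping just described can be sketched in Python (our illustration of the idea behind f_derivative, not a line-for-line port; the helper names `sigma`, `weights` and `derivative` are ours, and the stencil-enlargement step of listing lines 31–48 is omitted). Boundary nodes use the one-sided rows of the coefficient matrix, interior nodes the middle row of a sliding window:

```python
from itertools import combinations
from math import factorial, prod

def sigma(n, deg, j, ups):
    # sigma^(n)_{deg,j} of Eq. (13) via its defining sum (1-based j).
    if deg == 1:
        return 1
    pool = [ups[r - 1] for r in range(1, n + 1) if r != j]
    return sum(prod(c) for c in combinations(pool, deg - 1))

def weights(m, O, i, h=1.0):
    # Closed-form coefficients C_{i,l}(m, O) of Eq. (18).
    n = m + O
    ups = [k - i for k in range(1, n + 1)]
    return [(-1) ** (l - m - 1) * factorial(m)
            / (h ** m * factorial(l - 1) * factorial(n - l))
            * sigma(n, n - m, l, ups) for l in range(1, n + 1)]

def derivative(f, m, O, h=1.0):
    """Degree-m derivative of the sampled data f (len(f) >= m + O) over the
    whole domain: one-sided stencil rows near the boundaries, the middle
    row of a sliding window elsewhere."""
    N, n = len(f), m + O
    C = [weights(m, O, i, h) for i in range(1, n + 1)]
    mid = (n + 1) // 2
    out = []
    for j in range(1, N + 1):                 # 1-based domain node
        start = min(max(j - mid, 0), N - n)   # 0-based window start
        row = C[j - start - 1]                # stencil row for this node
        out.append(sum(c * v for c, v in zip(row, f[start:start + n])))
    return out

print(derivative([x * x for x in range(5)], 1, 2, 1.0))  # ≈ [0.0, 2.0, 4.0, 6.0, 8.0]
```

Differentiating the samples of x² reproduces 2x at every node, including the two boundary nodes handled by one-sided rows.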
78         M(i, j:j+np-1) = k;
79     end
80 end
81 end
82
83 coef = [U; M; L];
84 Diff_fun = coef*f;
85 % =============================================================
86 function C = Coeff(m, O, h)
87 % Weighting coefficients calculation
88
89 n = m + O;
90 k = [1:n];
91 k = k(:);
92 for i = 1:n
93     v = k - i;
94     sigma = Sig(v, n);
95     for l = 1:n
96         C(i,l) = ((-1)^(l-m-1)*factorial(m)/(factorial(l-1)*factorial(n-l)*h^m))*sigma(n-m,l);
97     end
98 end
99 % =============================================================
100 function S = Sig(v, n)
101 S = ones(n, n);
102 s = S;
103 for k = 1:n
104     vv = v;
105     vv(k) = v(1);
106     for i = 2:n
107         s(i,i) = s(i-1,i-1)*vv(i);
108         if i > 2
109             for j = i-1:-1:2
110                 s(j,i) = s(j-1,i-1)*vv(i) + s(j,i-1);
111             end
112         end
113     end
114     S(:,k) = s(:,n);
115 end
116 return
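For readers working outside Matlab: the recurrence inside Sig is the standard update for elementary symmetric polynomials, adding one parameter at a time. A Python port (ours, 0-based indices) can be cross-checked against the defining sums of Eq. (13):

```python
from itertools import combinations
from math import prod

def sig(v):
    """Port of the listing's Sig(v, n): S[j][k] = sigma^(n)_{j+1, k+1},
    the degree-j elementary symmetric function of {v[r] : r != k}, built
    with the same recurrence as the Matlab code (using Eq. (20) symmetry:
    replace upsilon_k by upsilon_1, then omit index 1)."""
    n = len(v)
    S = [[1] * n for _ in range(n)]
    for k in range(n):
        vv = list(v)
        vv[k] = v[0]                      # Eq. (20): upsilon_k -> upsilon_1
        s = [[1] * n for _ in range(n)]
        for i in range(1, n):
            s[i][i] = s[i - 1][i - 1] * vv[i]
            for j in range(i - 1, 0, -1):
                s[j][i] = s[j - 1][i - 1] * vv[i] + s[j][i - 1]
        for j in range(n):
            S[j][k] = s[j][n - 1]
    return S

# Cross-check the recurrence against the defining sums of Eq. (13).
v = [-2, -1, 0, 1, 2]
S = sig(v)
for k in range(5):
    pool = [v[r] for r in range(5) if r != k]
    for j in range(5):
        ref = 1 if j == 0 else sum(prod(c) for c in combinations(pool, j))
        assert S[j][k] == ref
print("recurrence matches Eq. (13)")
```

The recurrence costs O(n²) per column instead of the combinatorial cost of the brute-force sums, which is why the listing uses it.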
In Section 3, we have presented the mathematical derivation of our new method introduced for the numerical calculation
of the derivatives with any desired order of accuracy and any required degree. On the basis of this method, the derivative
fi (m, O) is constructed through the undetermined coefficient method as expressed by Eq. (1) with the weighting coefficients
C_{i,l}(m, O) given by the formula in Eq. (18) in terms of the elementary symmetric functions \sigma^{(n)}_{i,j} defined by Eq. (13). Generally,
the algebraic derivation of the formula provided is straightforward and transparent, simple and free of mathematical
complications. This makes the problem under consideration easy and clear for other researchers who only need to apply
the method in their research, novices and experts alike. The computer code
is handy and is provided in the form of a programming function which can simply be called and executed within the main
application by using the required derivative degree m, the order of accuracy needed O, the function values f , and the regular
spacing size h as inputs. Consequently, it is very suitable for being implemented by others, especially when solving ordinary
or partial differential equations, because there is no need to construct finite difference schemes that require more effort, in
particular when high orders of accuracy and high differentiation degrees are needed.
It is evident that the method presented is explicit and the proven formula is given in a closed form. As a consequence,
it is computationally less expensive compared with those of other implicit methods such as the operator method [2,7],
the interpolating polynomial method [3–5,7], and the lozenge diagram method [8]. Moreover, it is not restricted by the
degree and order, unlike the tabulated data of [17,18]. From the comparison between our method and the method discussed
in [19], our method has a clear advantage. That is because the latter method requires the solution of a
linear system of equations to determine the coefficients. This imposes more computational burden as well as leading to large
numerical errors and less accuracy of the calculated values of the coefficients, especially in the case of high degree derivatives
and high orders of accuracy. Although Spitzbart and Macon [20] have eliminated the problems discussed in relation to the
study of Gregory [19], their proposed algorithm has a disadvantage compared with our method presented here. Their model
developed for calculating the coefficients constitutes three summation formulas having to be evaluated in sequence. These
formulas require evaluation of the Stirling numbers of the first kind, a process that involves the construction of tables. As a
result, the Spitzbart and Macon method requires more memory and is computationally expensive.
In fact, the algorithm presented in this paper has an extensive and comprehensive expression, and is not restricted
by either the derivative degree or the finite difference scheme type, unlike the studies done in [6,9,21,22], which have
more limitations. For example, Khan and Ohba [21] have presented the explicit forward, backward and central difference
formulas for finite difference approximations with arbitrary orders but they apply only for the first-degree and second-
degree derivatives, except for a formula for any order and higher degree which is limited to the central finite difference
approximation type and therefore is not applicable at points near the stencil right and left ends. That is because the central
finite difference scheme uses the function values from both sides of the base point. Moreover, the formula in [9] applies
for the first-degree derivative only. In another work [6], the restriction was related to the nodes near the ends because the
formula was based on the central finite difference approximation.
In his study, Li [1] has introduced general explicit finite difference formulas with arbitrary order of accuracy for
approximating the first-degree and higher derivatives. His technique was named later as Li’s method (see Hickernell and
Yang [24]) or Li’s theorem (see Bai et al. [25]). Since our new formula for the weighting coefficients has a similar construction
to the formulas given in [1], it is important to clarify the following general major differences:
1. In Li’s method, the author has based the mathematical proof of the weighting coefficients formulas on solving the linear
algebraic system of equations by using Cramer’s rule. Through the implementation of the generalized Vandermonde
determinant and the use of the basic properties of that determinant, Li has obtained the formulas for the weighting
coefficients. However, in our methodology presented here and as seen in Section 3, we have solved the linear algebraic
system of equations through the matrix inverse method. This is done by implementation of the explicit closed formula
for the inverse of the Vandermonde matrix given in [23].
2. The formulas for weighting coefficients in Li’s method are given in terms of the root coefficient, as named by Li in his
code [26], which is given by a set of different expressions in correspondence with the derivative degree required to
be calculated; see Eq. (2-1) in [1]. In comparison, our formula for the weighting coefficients, Eq. (18), is proven to be
dependent on the elementary symmetric function which is defined by Eq. (13) and is different from the root coefficient
in Li’s method.
It should be mentioned that Li’s theorem has the advantage of applicability to equally and to unequally spaced data,
as compared with our method presented here, which applies only to equally spaced data. However, we stress that our
new method has clear advantages over Li’s method when equally spaced data are under consideration. The reasons are
discussed in the following points:
1. Although the algorithm developed in [1] can be used to evaluate the derivatives numerically with any arbitrary order
of accuracy, it is restricted by the required derivative degree. In other words, it can only be used with some predefined
derivative degrees and not with any arbitrary degree. The reason is that Li did not express the root coefficient in his
formulas in terms of a single closed form function. Instead, he defined this coefficient in terms of multi-expressions
corresponding to the derivative degree; see Eq. (2-1) in [1]. As a consequence, Li’s method is not a closed form method.
If the required derivative degree is changed, the corresponding expression for the root coefficient must be changed as
well. This makes the method less convenient. Additionally, more effort is required to write a programming
code for Li’s method. That is because, for each required degree of derivative approximation, the corresponding expression
for the root coefficient has to be added to the code. Consequently, the programming code required to evaluate up to the
eighth-degree differentiation and based on Li’s method will be very long and becomes even longer if evaluations of other
degrees are required; see [26]. However, the method introduced in our present study is superior and can be adequately
employed to evaluate the derivative with any arbitrary order and also any arbitrary degree. That is because the formula
presented is in closed form as regards the order as well as the degree. Roughly speaking, our algorithm presented here
is much more convenient, more dependable, and simpler than Li’s theorem.
2. It is well known that the computational burden grows with the running and execution time, the amount of storage
memory used, and the size of the written programming code. These factors are standard criteria for preferring one
method over another. Therefore, we can deduce that Li’s model is computationally expensive compared
with our model presented in this study. This can be shown by comparing the piece of our code required to calculate
\sigma^{(n)}_{i,j} for any arbitrary degree (see lines 94–110 in Listing 1) with that required to calculate the root coefficients of Li’s
method up to only the eighth-degree differentiation (see [26]). Moreover, the necessity of using a large number of nested
loopings when evaluating the root coefficients in Li’s formulas results in a longer running time. In the case of high degree
derivatives, this number of nested looping blocks and the associated code execution time become large. For example, to
approximate the mth-degree derivative of some data by using Li’s technique, we need a total of (m + 1) nested looping
structures in the programming code; see [26].
In order to prove the validity of our algorithm and computer code, the calculated weighting coefficients obtained
when using our method are compared with the coefficients derived in [17]. According to Bickley’s formula, his tabulated
coefficients are scaled by a factor of \lambda = h^m (n-1)!/m!. Therefore, we scale our calculated weighting coefficients by the
same factor \lambda, in order to make a comparison with his results, as shown in the listing code lines 26–28. Our calculated scaled
coefficients based on a fourth-degree derivative with an approximation of the fifth order of accuracy, m = 4, O = 5, as an
example, are given in Table 1. These coefficients are the same coefficients as were tabulated in [17] and this result verifies
the validity of the algorithm and computer program presented. To examine our technique for finding the discrete function
Table 1
Scaled weighting coefficients, C̃_{i,l} = λ · C_{i,l}.

Node i   C̃_{i,1}   C̃_{i,2}   C̃_{i,3}   C̃_{i,4}   C̃_{i,5}   C̃_{i,6}   C̃_{i,7}   C̃_{i,8}   C̃_{i,9}
1         22449   −147392   428092   −720384   769510   −534464   235452   −60032     6769
2          6769    −38472    96292   −140504   132510    −83384    34132    −8232      889
3           889     −1232    −6468     21616   −28490     20496    −8708     2128     −231
4          −231      2968    −9548     12936    −7490       616     1092     −392       49
5            49      −672     4732    −13664    19110    −13664     4732     −672       49
6            49      −392     1092       616    −7490     12936    −9548     2968     −231
7          −231      2128    −8708     20496   −28490     21616    −6468    −1232      889
8           889     −8232    34132    −83384   132510   −140504    96292   −38472     6769
9          6769    −60032   235452   −534464   769510   −720384   428092  −147392    22449
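The comparison with Bickley's table can be reproduced in a few lines. The Python sketch below (ours, not the Matlab listing; the helpers `sigma` and `weights` implement Eqs. (13) and (18)) evaluates the closed-form weights for m = 4, O = 5 with h = 1, applies the scale λ = (n − 1)!/m! = 1680, and recovers the middle row of Table 1:

```python
from itertools import combinations
from math import factorial, prod

def sigma(n, deg, j, ups):
    # sigma^(n)_{deg,j} of Eq. (13) via its defining sum (1-based j).
    if deg == 1:
        return 1
    pool = [ups[r - 1] for r in range(1, n + 1) if r != j]
    return sum(prod(c) for c in combinations(pool, deg - 1))

def weights(m, O, i, h=1.0):
    # Closed-form coefficients C_{i,l}(m, O) of Eq. (18).
    n = m + O
    ups = [k - i for k in range(1, n + 1)]
    return [(-1) ** (l - m - 1) * factorial(m)
            / (h ** m * factorial(l - 1) * factorial(n - l))
            * sigma(n, n - m, l, ups) for l in range(1, n + 1)]

m, O = 4, 5
n = m + O
lam = factorial(n - 1) / factorial(m)      # Bickley's scale with h = 1
row5 = [round(lam * c) for c in weights(m, O, 5)]
print(row5)   # → [49, -672, 4732, -13664, 19110, -13664, 4732, -672, 49]
```

The unscaled middle-node weights are the familiar 9-point fourth-derivative stencil 7/240, −2/5, 169/60, −122/15, 91/8, ..., which the λ-scaling maps onto the integer entries of the table.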
Table 2
Comparison between the calculated and the exact derivatives for f = exp(x).
Derivative Exact value Numerical value Approximation error
derivatives within a given data sample, the exponential function is used, because its analytical derivatives are trivial. The
data sample f used by the program is generated at a total of N = 50 equispaced nodes beginning at x1 = 0 with step size
h = 0.1; see the code listing lines 12–17. The tabulated data sample f is introduced into the
Matlab code and the derivatives of different degrees and orders of accuracy are calculated. Table 2 shows a sample of the
code outputs with a floating point precision of 20 decimal digits. The calculated arbitrary degree and order derivatives
of f at randomly chosen locations in the solution domain are compared with the exact analytical derivatives. As seen from
Table 2, the numerical errors of the approximation are of the selected order of accuracy, with error ≈ h^O. Moreover,
it is important to use high precision in the calculations when computing high degree and high order derivatives, because
the output is highly sensitive to the calculation precision used.
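The error ≈ h^O behaviour is easy to check directly: halving h should reduce the error of an O-th order approximation by about 2^O. A Python sketch (ours, reusing the closed-form helpers `sigma` and `weights` of Eq. (18)) for the first derivative of exp with O = 4:

```python
from itertools import combinations
from math import exp, factorial, prod

def sigma(n, deg, j, ups):
    # sigma^(n)_{deg,j} of Eq. (13) via its defining sum (1-based j).
    if deg == 1:
        return 1
    pool = [ups[r - 1] for r in range(1, n + 1) if r != j]
    return sum(prod(c) for c in combinations(pool, deg - 1))

def weights(m, O, i, h=1.0):
    # Closed-form coefficients C_{i,l}(m, O) of Eq. (18).
    n = m + O
    ups = [k - i for k in range(1, n + 1)]
    return [(-1) ** (l - m - 1) * factorial(m)
            / (h ** m * factorial(l - 1) * factorial(n - l))
            * sigma(n, n - m, l, ups) for l in range(1, n + 1)]

def error_first_derivative(h, O=4, x0=1.0):
    # Middle node i = 3 of an n = 5 stencil; the exact derivative of exp is exp.
    w = weights(1, O, 3, h)
    approx = sum(w[l] * exp(x0 + (l - 2) * h) for l in range(5))
    return abs(approx - exp(x0))

r = error_first_derivative(0.1) / error_first_derivative(0.05)
print(r)   # ≈ 2^4 = 16, consistent with error ≈ h^O for O = 4
```

In double precision this check is reliable only while the truncation error dominates round-off; for very small h or very high m and O, higher working precision is needed, as noted above.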
7. Conclusions
In the present work, we introduced an algorithm and a computer program for calculating numerically the derivatives of
discrete functions with arbitrary degree and order of accuracy using a closed explicit formula. The derivation of the algorithm
is based on the undetermined coefficients method and the closed form of the Vandermonde matrix inverse as a function of
the elementary symmetric functions. The numerical testing showed good results when various degree differentiations with
various approximation orders were calculated.
References
[1] J. Li, General explicit difference formulas for numerical differentiation, J. Comput. Appl. Math. 183 (1) (2005) 29–52.
[2] C. Jordan, Calculus of Finite Differences, second ed., Chelsea, New York, 1950.
[3] L. Collatz, The Numerical Treatment of Differential Equations, second ed., Springer, Berlin, 1966.
[4] S.C. Chapra, R.P. Canale, Numerical Methods for Engineers, sixth ed., McGraw-Hill, 2010.
[5] J.D. Hoffman, Numerical Methods for Engineers and Scientists, second ed., Marcel Dekker, New York, 2001.
[6] I.R. Khan, R. Ohba, Taylor series based finite difference approximations of higher-degree derivatives, J. Comput. Appl. Math. 154 (2003) 115–124.
[7] G. Dahlquist, A. Bjorck, Numerical Methods, Prentice Hall, Englewood Cliffs, 1974.
[8] C.F. Gerald, P.O. Wheatley, Applied Numerical Analysis, fifth ed., Addison-Wesley, 1994.
[9] I.R. Khan, R. Ohba, New finite difference formulas for numerical differentiation, J. Comput. Appl. Math. 126 (2000) 269–276.
[10] L.B. Rall, Automatic Differentiation: Techniques and Applications, in: Lect. Notes Comput. Sci., vol. 120, Springer, Berlin, 1981.
[11] C.H. Bischof, H.M. Bücker, P. Hovland, U. Naumann, J. Utke, Advances in Automatic Differentiation, Springer-Verlag, Berlin, Heidelberg, 2008.
[12] A. Griewank, A. Walther, Evaluating Derivatives: Principles and Techniques of Algorithmic Differentiation, second ed., SIAM, Philadelphia, PA, 2008.
[13] S. Lu, S.V. Pereverzev, Numerical differentiation from a viewpoint of regularization theory, Math. Comp. 75 (256) (2006) 1853–1870.
[14] J. Cullum, Numerical differentiation and regularization, SIAM J. Numer. Anal. 8 (2) (1971) 254–265.
[15] A.G. Ramm, A.B. Smirnova, On stable numerical differentiation, Math. Comp. 70 (235) (2001) 1131–1154.
[16] A. Sidi, Extrapolation methods and derivatives of limits of sequences, Math. Comp. 69 (229) (1999) 305–324.
[17] W.G. Bickley, Formulae for numerical differentiation, Math. Gaz. 25 (263) (1941) 19–27.
[18] H.B. Keller, V. Pereyra, Symbolic generation of finite difference formulas, Math. Comp. 32 (144) (1978) 955–971.
[19] R.T. Gregory, A method for deriving numerical differentiation formulas, Amer. Math. Monthly 64 (2) (1957) 79–82.
[20] A. Spitzbart, N. Macon, Numerical differentiation formulas, Amer. Math. Monthly 64 (10) (1957) 721–723.
[21] I.R. Khan, R. Ohba, N. Hozumi, Mathematical proof of closed form expressions for finite difference approximations based on Taylor series, J. Comput.
Appl. Math. 150 (2003) 303–309.
[22] I.R. Khan, R. Ohba, Closed-form expressions for the finite difference approximations of first and higher derivatives based on Taylor series, J. Comput.
Appl. Math. 107 (1999) 179–193.
[23] M.E.A. El-Mikkawy, Explicit inverse of a generalized Vandermonde matrix, Appl. Math. Comput. 146 (2003) 643–651.
[24] F.J. Hickernell, S. Yang, Simplified analytical expressions for numerical differentiation via cycle index, J. Comput. Appl. Math. 224 (2009) 433–443.
[25] H. Bai, A. Xu, F. Cui, Representation for the Lagrangian numerical differentiation formula involving elementary symmetric functions, J. Comput. Appl.
Math. 231 (2009) 907–913.
[26] Dr. Jianping Li’s Home Page, Subroutines, numerical differentiation. http://www.lasg.ac.cn/staff/ljp/subroutine/differentiation_uneven.f (last accessed
07.09.11).