Numerical Analysis, 4th Class, Environmental Department, College of Engineering, 2016-2017


Numerical Analysis

4th Class
Environmental
Department
College Of Engineering
2016-2017

Content: Numerical Analysis

"Solution of Algebraic and Transcendental Equations"

• Locating the root
• Method of Bisection
• Secant Method
• False Position Method
• Newton-Raphson Method
• Fixed Point Method or "Iterative Method"
• Newton-Raphson for Two Equations
• Iterative Method for Two Equations

Nonlinear Equation Solvers (all iterative):
• Bracketing methods: Bisection, False Position (Regula-Falsi)
• Graphical methods
• Open methods: Newton-Raphson, Secant
Locating the Position of Roots

Let f(x) be a continuous function on the interval [a, b]. To locate the position of the roots of f(x) = 0, we divide the interval [a, b] into n subintervals:
a = x0 < x1 < x2 < ... < xn = b,
where xi = x0 + i·h, i = 0, 1, 2, ..., n and h = (b - a)/n.
We then look for a subinterval whose endpoints give f opposite signs; such a subinterval is taken as the initial interval [a, b] of a root-finding method, because if f(a) and f(b) have opposite signs, then there exists c ∈ [a, b] for which f(c) = 0.
Example: Find the approximate location of the roots of
f(x) = x⁴ - 7x³ + 3x² + 26x - 10
on the interval [-8, 8].

i- Let n = 4:
h = (xn - x0)/n = (8 - (-8))/4 = 16/4 = 4

x      -8   -4    0    4    8
f(x)    +    +    -    -    +

There exist roots in the intervals (-4, 0) and (4, 8).

ii- Let n = 8:
h = (xn - x0)/n = (8 - (-8))/8 = 16/8 = 2

x      -8   -6   -4   -2    0    2    4    6    8
f(x)    +    +    +    +    -    +    -    +    +

Then there exist roots in the intervals (-2, 0), (0, 2), (2, 4) and (4, 6).
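
A short Python sketch of this tabulation idea (added for illustration; the helper name locate_roots is not from the notes):

def f(x):
    return x**4 - 7*x**3 + 3*x**2 + 26*x - 10

def locate_roots(f, a, b, n):
    # Evaluate f on n subintervals of [a, b] and return every
    # subinterval whose endpoints give f opposite signs.
    h = (b - a) / n
    xs = [a + i * h for i in range(n + 1)]
    return [(x0, x1) for x0, x1 in zip(xs, xs[1:]) if f(x0) * f(x1) < 0]

print(locate_roots(f, -8, 8, 4))   # [(-4.0, 0.0), (4.0, 8.0)]
print(locate_roots(f, -8, 8, 8))   # [(-2.0, 0.0), (0.0, 2.0), (2.0, 4.0), (4.0, 6.0)]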
Find the Root of an Equation: What Goes Wrong?

(Figure: three problem cases)
• Tangent point: very difficult to find.
• Singularity: brackets don't surround the root.
• Pathological case: infinite number of roots, e.g. sin(1/x).


Some Numerical Methods for Solving Non-Linear Equations

Bisection Method
Theorem
If a function f(x) is continuous between two points a and b, and f(a) and f(b) are of opposite sign, then there exists at least one root of f(x) = 0 between a and b (i.e., a point where the graph of f(x) intersects the x-axis).


Bisection Method
Theorems:
1. If the function f(x) in f(x) = 0 does not change sign between two points, roots may still exist between the two points.
2. If the function f(x) in f(x) = 0 does not change sign between two points, there may not be any roots between the two points.
3. If the function f(x) in f(x) = 0 changes sign between two points, more than one root may exist between the two points.
Bisection Method

• Let f(a) and f(b) have opposite signs.
• Find xc = (a + b)/2.
• If f(xc) = 0, then xc is the root of the equation.
• Otherwise, the root lies between
  • a and xc [if f(a) and f(xc) have opposite signs], or
  • xc and b [if f(xc) and f(b) have opposite signs].
• We continue this process until we find the root (i.e., f(xc) = 0), or the latest interval is smaller than some specified tolerance.
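A minimal Python sketch of the bisection loop described above (illustrative only, not the lecture's code):

def bisect(f, a, b, tol=1e-3, max_iter=100):
    # Assumes f is continuous on [a, b] with f(a) and f(b) of opposite signs.
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        xc = (a + b) / 2
        if f(xc) == 0 or (b - a) / 2 < tol:
            return xc                 # exact root found, or interval small enough
        if f(a) * f(xc) < 0:
            b = xc                    # root lies in [a, xc]
        else:
            a = xc                    # root lies in [xc, b]
    return (a + b) / 2

# Example 1 below: x^3 + 4x^2 - 1 = 0 on [0, 1]
print(bisect(lambda x: x**3 + 4*x**2 - 1, 0.0, 1.0))   # about 0.4728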


Bisection Method

(Figure: one bisection step. f(a) is negative, f(b) is positive, and the midpoint xc = (a + b)/2 gives f(xc) negative. What will be the next interval?)


Bisection Method: Example 1
Find the root of the equation x³ + 4x² - 1 = 0.
Solution
Let a = 0 and b = 1.

Now, f(0) = (0)³ + 4(0)² - 1 = -1 < 0 and
f(1) = (1)³ + 4(1)² - 1 = 4 > 0,
i.e., f(a) and f(b) have opposite signs.
Therefore, f(x) has a root in the interval [a, b] = [0, 1].

xc = (0 + 1)/2 = 0.5,
f(0.5) = 0.125 > 0. Now f(a) and f(xc) have opposite signs,
so the next interval is [0, 0.5].


Bisection Method: Example 1
Find the root of the equation x³ + 4x² - 1 = 0.
Solution
a         b          xc=(a+b)/2   f(a)       f(b)      f(xc)
0         1          0.5          -1          4         0.125
0         0.5        0.25         -1          0.125    -0.73438
0.25      0.5        0.375        -0.73438    0.125    -0.38477
0.375     0.5        0.4375       -0.38477    0.125    -0.15063
0.4375    0.5        0.46875      -0.15063    0.125    -0.01810
0.46875   0.5        0.484375     -0.01810    0.125     0.05212
0.46875   0.484375   0.476563     -0.01810    0.05212   0.01668

... and so we approach the root 0.472834.
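
As a check, a small self-contained loop (an illustration, not part of the notes) reproduces the rows of this table:

f = lambda x: x**3 + 4*x**2 - 1   # the function from Example 1

a, b = 0.0, 1.0
for _ in range(7):                 # the seven rows shown above
    xc = (a + b) / 2
    print(f"{a:.6f}  {b:.6f}  {xc:.6f}  {f(xc): .5f}")
    if f(a) * f(xc) < 0:           # root lies in [a, xc]
        b = xc
    else:                          # root lies in [xc, b]
        a = xc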


Bisection Method: Class Work
Ex.1 Find the approximate value of the root of each of the following:
1. x³ - 4x + 1 = 0, where ε = 0.001
2. eˣ - 3x = 0
3. x - cos(x) = 0
4. x tan(x) = 1
5. f(x) = log(x) + x - 5

Ex.2 Find the real root of the equation f(x) = x³ - x - 1 = 0 correct to 2 decimal places (ε = 0.01). Ans.: 1.325683

Ex.3 Find the real root of the equation f(x) = x⁴ - cos(x) + x = 0 correct to 2 decimal places (ε = 0.01). Ans.: 0.637695
Bisection Example
Linear Interpolation Methods
• Most functions can be approximated by a straight line over a small interval. The two methods that follow, the secant method and the false position method, are based on this idea.
1. Secant Method

Secant Method – Derivation
Newton's method:
xi+1 = xi - f(xi)/f′(xi)                          (1)
Approximate the derivative by a backward difference:
f′(xi) ≈ ( f(xi) - f(xi-1) ) / ( xi - xi-1 )      (2)
(Figure 1: geometrical illustration of the Newton-Raphson method.)
Substituting Equation (2) into Equation (1) gives the secant method:
xi+1 = xi - f(xi) · ( xi - xi-1 ) / ( f(xi) - f(xi-1) )
Secant Method – Derivation
The secant method can also be derived from geometry:
(Figure 2: geometrical representation of the secant method.)
The similar triangles in Figure 2 give
AB/AE = DC/DE,
which can be written as
f(xi) / ( xi - xi+1 ) = f(xi-1) / ( xi-1 - xi+1 )
On rearranging, the secant method is given as
xi+1 = xi - f(xi) · ( xi - xi-1 ) / ( f(xi) - f(xi-1) )
Algorithm for Secant Method
Step 1
Calculate the next estimate of the root from two initial guesses:
xi+1 = xi - f(xi) · ( xi - xi-1 ) / ( f(xi) - f(xi-1) )
Find the absolute relative approximate error:
|εa| = | ( xi+1 - xi ) / xi+1 | × 100
Step 2
Check whether the absolute relative approximate error is greater than the prespecified relative error tolerance. If so, go back to Step 1; otherwise stop the algorithm.
Also check whether the number of iterations has exceeded the maximum number of iterations.
Algorithm
• Given two guesses x0, x1 near the root,
• If |f(x0)| < |f(x1)| then
  • Swap x0 and x1.
• Repeat
  • Set x2 = x1 - f(x1) · ( x0 - x1 ) / ( f(x0) - f(x1) )
  • Set x0 = x1
  • Set x1 = x2
• Until |f(x2)| < tolerance value.
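
A minimal Python sketch of this secant algorithm (illustrative code, not the lecture's):

import math

def secant(f, x0, x1, tol=1e-3, max_iter=100):
    # Iterate x2 = x1 - f(x1)(x0 - x1)/(f(x0) - f(x1)) until the step or |f| is small.
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f0 == f1:
            raise ZeroDivisionError("f(x0) == f(x1); secant step undefined")
        x2 = x1 - f1 * (x0 - x1) / (f0 - f1)
        if abs(x2 - x1) < tol or abs(f(x2)) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

# Example 1 below: f(x) = cos(x) + 2 sin(x) + x^2 with x0 = 0, x1 = -0.1
print(secant(lambda x: math.cos(x) + 2*math.sin(x) + x**2, 0.0, -0.1))   # about -0.6595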
Discussion of Secant Method
• The secant method has better convergence than the bisection method.
• Because the root is not bracketed, there are pathological cases where the algorithm diverges from the root.
• It may fail if the function is not continuous.
Example 1
As an example of the secant method, suppose we wish to find a root of the function
f(x) = cos(x) + 2 sin(x) + x². A closed-form solution for x does not exist, so we must use
a numerical technique. We will use x0 = 0 and x1 = -0.1 as our initial approximations.
We will let εstep = 0.001 and εabs = 0.001, and we will halt after a maximum of N = 100 iterations.
We will use four-decimal-digit arithmetic to find a solution; the resulting iteration is
shown in Table 1.
Table 1. The secant method applied to f(x) = cos(x) + 2 sin(x) + x².

The iteration uses
xn+1 = xn - f(xn) · ( xn - xn-1 ) / ( f(xn) - f(xn-1) ),
which for the two starting guesses reads
x2 = x1 - f(x1) · ( x0 - x1 ) / ( f(x0) - f(x1) ).

Thus, with the last step, both halting conditions are met, and therefore, after six iterations, our approximation to the root is -0.6595.
Example: Use the secant method to find roots of the following equations, where ε = 0.001:
1. x³ - 0.165x² + 3.993×10⁻⁴ = 0
2. eˣ - 3x = 0
3. x² - 10 cos(x) = 0
4. x tan(x) = 1
5. f(x) = x³ - 3x + 2
• Solution 1:
First we find an interval containing a root: [0.02, 0.05], so x0 = 0.02 and x1 = 0.05.
Step 1: Let n = 1.
x2 = ( x0·f(x1) - x1·f(x0) ) / ( f(x1) - f(x0) ) = ( 0.02·f(0.05) - 0.05·f(0.02) ) / ( f(0.05) - f(0.02) ) = 0.06461
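
A quick numerical check of this step (illustrative; it assumes the coefficient 0.165 as reconstructed above):

f = lambda x: x**3 - 0.165*x**2 + 3.993e-4

x0, x1 = 0.02, 0.05
x2 = (x0*f(x1) - x1*f(x0)) / (f(x1) - f(x0))
print(round(x2, 5))   # 0.06461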
Regula-Falsi Method or False Position
Type of Algorithm (Equation Solver)
The Regula-Falsi Method (sometimes called the False Position Method) is a method used to find a numerical estimate of a root of an equation.
This method attempts to solve an equation of the form f(x) = 0. (This is very common in most numerical analysis applications.) Any equation can be written in this form.
Algorithm Requirements
This algorithm requires a function f(x) and two points a and b for which f(x) is positive for one of the values and negative for the other. We can write this condition as f(a)·f(b) < 0.
If the function f(x) is continuous on the interval [a, b] with f(a)·f(b) < 0, the algorithm will eventually converge to a solution.
This algorithm cannot be used to find a tangential root, that is, a root where the curve is tangent to the x-axis and f(x) is either positive or negative on both sides of the root. For example, f(x) = (x - 3)² has a tangential root at x = 3.
False Position
• The method of false position is seen as an improvement on the secant method.
• The method of false position avoids the problems of the secant method by ensuring that the root is bracketed between the two starting points and remains bracketed between successive pairs.

This technique is similar to the bisection method except that the next iterate is taken as the intersection of the line joining the pair of x-values with the x-axis, rather than at the midpoint.
False Position
Algorithm
• Given two guesses x0, x1 that bracket the root,
• Repeat
  • Set x2 = x1 - f(x1) · ( x0 - x1 ) / ( f(x0) - f(x1) )
  • If f(x2) is of opposite sign to f(x0) then
    • Set x1 = x2
  • Else
    • Set x0 = x2
  • End If
• Until |f(x2)| < tolerance value.
This method achieves better convergence, but with a more complicated algorithm.
It may fail if the function is not continuous.
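
A minimal Python sketch of the false-position algorithm above (illustrative; the initial bracketing check is an added safeguard):

def false_position(f, x0, x1, tol=1e-3, max_iter=100):
    # x0, x1 must bracket the root: f(x0) * f(x1) < 0.
    if f(x0) * f(x1) >= 0:
        raise ValueError("x0 and x1 must bracket the root")
    x2 = x1
    for _ in range(max_iter):
        x2 = x1 - f(x1) * (x0 - x1) / (f(x0) - f(x1))
        if abs(f(x2)) < tol:
            return x2
        if f(x2) * f(x0) < 0:
            x1 = x2                   # root lies in [x0, x2]
        else:
            x0 = x2                   # root lies in [x2, x1]
    return x2

# The worked example that follows: f(x) = x^3 - 2x - 3 on [0, 2]
print(false_position(lambda x: x**3 - 2*x - 3, 0.0, 2.0))   # about 1.8933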
Example
Let us look for a solution to the equation x³ - 2x - 3 = 0.
We consider the function f(x) = x³ - 2x - 3.
On the interval [0, 2] the function is negative at 0 and positive at 2. This means that a = 0 and b = 2 (i.e. f(0)·f(2) = (-3)(1) = -3 < 0, so we can apply the algorithm).

xrfp = 0 - f(0)·(2 - 0) / ( f(2) - f(0) ) = -(-3)(2) / ( 1 - (-3) ) = 6/4 = 3/2

f(xrfp) = f(3/2) = 27/8 - 3 - 3 = -21/8. This is negative, so we set a = 3/2, keep b the same, and apply the same step to the interval [3/2, 2].

xrfp = 3/2 - f(3/2)·(2 - 3/2) / ( f(2) - f(3/2) ) = 3/2 + (21/8)(1/2) / ( 1 + 21/8 ) = 3/2 + (21/16)/(29/8) = 3/2 + 21/58 = 108/58 = 54/29

f(xrfp) = f(54/29) ≈ -0.267785. This is negative, so we set a = 54/29, keep b the same, and apply the same step to the interval [54/29, 2].
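
Continuing the same iteration numerically (an illustrative sketch, not part of the notes):

f = lambda x: x**3 - 2*x - 3

a, b = 0.0, 2.0
for i in range(1, 6):
    x = a - f(a) * (b - a) / (f(b) - f(a))   # regula-falsi point
    print(i, x, f(x))
    if f(x) * f(a) < 0:
        b = x
    else:
        a = x
# The iterates 1.5, 54/29 = 1.86207, ... approach the root near 1.8933.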
Example 1
Consider finding the root of f(x) = x² - 3. Let εstep = 0.01, εabs = 0.01 and start with the interval [1, 2].
Table 1. False-position method applied to f(x) = x² - 3.

Thus, with the third iteration, we note that the last step 1.7273 → 1.7317 is less than 0.01 and |f(1.7317)| < 0.01, and therefore we choose b = 1.7317 to be our approximation of the root.
Note that after three iterations of the false-position method, we have an acceptable answer (1.7317, where f(1.7317) = -0.0044), whereas with the bisection method it took seven iterations to find a (notably less accurate) acceptable answer (1.73144, where f(1.73144) = 0.0082).
Example 2
Consider finding the root of f(x) = e⁻ˣ(3.2 sin(x) - 0.5 cos(x)) on the interval [3, 4], this time with εstep = 0.001, εabs = 0.001.
Table 2. False-position method applied to f(x) = e⁻ˣ(3.2 sin(x) - 0.5 cos(x)).

Thus, after the sixth iteration, we note that the final step, 3.2978 → 3.2969, has a size less than 0.001 and |f(3.2969)| < 0.001, and therefore we choose b = 3.2969 to be our approximation of the root.
In this case, the solution we found was not as good as the solution we found using the bisection method (f(3.2963) = 0.000034799); however, we used only six instead of eleven iterations.
END Lecture
5- Newton-Raphson Method
Given an approximate value of a root of an equation, a better and closer
approximation to the root can be found by using an iterative process called
Newton’s method or Newton-Raphson method.
Let x = x0 be an approximate value of one root of the equation f(x) = 0. If x = x1 is the exact value, then
f(x1) = 0 ............. (1)
If the difference between x0 and x1 is very small, call it h, then
x1 = x0 + h
f(x1) = f(x0 + h) = 0
Expanding by Taylor's theorem we get
f(x0) + h·f′(x0) + (h²/2)·f″(x0) + (h³/6)·f‴(x0) + ... = 0 ............. (2)
Since h is small, neglecting all powers of h above the first in (2), we get
f(x0) + h·f′(x0) ≈ 0
h ≈ -f(x0)/f′(x0),  so that  x1 = x0 + h = x0 - f(x0)/f′(x0)
Similarly, if x2 denotes a better approximation obtained by starting with x1, we get
x2 = x1 - f(x1)/f′(x1)
Proceeding in this way we get
xn+1 = xn - f(xn)/f′(xn)
The above is a general formula, known as the Newton-Raphson formula. Geometrically, Newton's method is equivalent to replacing a small arc of the curve y = f(x) by a tangent line drawn at a point of the curve.

Theorem 1: Let p be an unknown root of f(x) = 0 and xn the n-th approximation to p in the Newton-Raphson method. Assume that I = [a, b] is an interval containing p and xn such that:
a- f′(x) ≠ 0 for all x ∈ I, and either f″ < 0 or f″ > 0 for all x ∈ I.
b- There are positive numbers m and M such that m ≤ |f′(x)| and |f″(x)| ≤ M for all x in I.
Then
|en+1| ≤ ( M / (2m) ) · en²
where ei = |p - xi|, i = n, n+1.
Proof: H.W.
Question: Show that the Newton-Raphson method converges quadratically.
Example: Find the root of eˣ = 4x, where ε = 0.01.
Solution: First find the interval and an approximate starting value: x0 = 2.1.
Let f(x) = eˣ - 4x, so f′(x) = eˣ - 4.
Step 1:
f(2.1) = e^2.1 - 4(2.1) = -0.23383
f′(2.1) = e^2.1 - 4 = 4.16617
x1 = x0 - f(x0)/f′(x0) = 2.1 - (-0.23383)/4.16617 = 2.1561
Step 2:
f(2.1561) = e^2.1561 - 4(2.1561) = 0.0129861
f′(2.1561) = e^2.1561 - 4 = 4.6373861
x2 = x1 - f(x1)/f′(x1) = 2.1561 - (0.0129861)/4.6373861 = 2.1533
and x3 = 2.1532.
Since |x3 - x2| = |2.1532 - 2.1533| = 0.0001 < ε = 0.01, we stop.
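
A minimal Python sketch of the Newton-Raphson iteration applied to this example (illustrative; the stopping rule |xn+1 - xn| < ε is the criterion given in Note 4 below):

import math

def newton(f, fprime, x0, eps=0.01, max_iter=50):
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / fprime(x)      # Newton-Raphson step
        if abs(x_new - x) < eps:
            return x_new
        x = x_new
    return x

# f(x) = e^x - 4x, f'(x) = e^x - 4, starting from x0 = 2.1
print(newton(lambda x: math.exp(x) - 4*x, lambda x: math.exp(x) - 4, 2.1))   # about 2.1533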
Notes:

1- Newton's algorithm is widely used because, at least in the near neighborhood of a root, it is more rapidly convergent than any of the methods discussed so far.
2- The method is quadratically convergent.
3- The method may converge to a root different from the expected one, or diverge, if the starting value is not close enough to the root.
4- Criterion for ending the iteration: the decision to stop the iteration depends on the accuracy desired by the user. If ε denotes the tolerable error, we stop when
|xn+1 - xn| ≤ ε
Exercises:

1- Derive the secant method from the Newton-Raphson method.
2- Find the real roots of f(x) = x³ + 5x² - 3x - 4, correct to three decimals.
3- Use the Newton-Raphson method to find a general iteration formula for each of the following:
i. the k-th root of c, where c and k are constants.
ii. 1/k, where k is a constant.
4- Use the Newton-Raphson method to find the approximate value of the following, where ε = 0.01:
a. x = cos(x)    b. x - e⁻ˣ = 0    c. 3 tan(3x) = 3x + 1
6- Iterative Method: "Fixed Point Iterations"
Suppose we have an equation
f(x) = 0 ......... (1)
whose roots are to be determined. Equation (1) can be expressed as
x = g(x) .......... (2)
Putting x = x0 in the R.H.S. of (2) we get the first approximation
x1 = g(x0)
The successive approximations are then given by
x2 = g(x1)
x3 = g(x2)
.
.
xn = g(xn-1)
The sequence of approximations x1, x2, x3, ..., xn converges to the root of x = g(x); it can be shown that if |g′(x)| < 1 when x is sufficiently close to the exact value c of the root, then
xn → c as n → ∞
Note: 1- The smaller the value of |g′(x)|, the more rapid the convergence.
2- From equation (1) we have the relation
|xn - a| = λ |xn-1 - a|, where λ is a constant.
Hence the error at any stage is proportional to the error in the previous stage. Therefore the iteration method has linear convergence.
3- The iteration method is more useful for finding the roots of an equation which is in the form of an infinite series.

The sufficient condition for convergence:

Let λ be a fixed point of g(x), i.e. λ = g(λ).
x1 = g(x0)
λ - x1 = g(λ) - g(x0) = (λ - x0) g′(θ0), where θ0 is between λ and x0 (by the Mean Value Theorem).
λ - x2 = g(λ) - g(x1) = (λ - x1) g′(θ1) = (λ - x0) g′(θ1) g′(θ0), where θ1 is between λ and x1 (by the Mean Value Theorem).
.
.
λ - xn = g(λ) - g(xn-1) = (λ - xn-1) g′(θn-1) = (λ - x0) g′(θ0) g′(θ1) ... g′(θn-1)
Let L = max |g′(θi)|, i = 0, 1, 2, ..., n-1. Then
|λ - xn| = |λ - xn-1| |g′(θn-1)| = |λ - x0| |g′(θ0)| |g′(θ1)| ... |g′(θn-1)| ≤ Lⁿ |λ - x0|
The value |λ - xn| approaches zero if L < 1, so L < 1 is a sufficient condition for convergence.
Hence |g′(x)| ≤ L < 1, and in particular |g′(x0)| < 1, is the sufficient condition used to choose the form g(x) for finding a root of f(x).

Theorem: If g([a, b]) ⊆ [a, b] and |g′(x)| ≤ L < 1 for all x ∈ [a, b], then there exists exactly one λ in [a, b] such that
λ = g(λ)
Proof: Exercise
Geometrically:
A solution of f(x) = 0 is an intersection point of the graph of f with the x-axis, and the convergence tests are:
1- If |g′(x0)| is near 1 then the convergence is slow ("monotonic").
2- If -1 < g′(x0) < 0 then we have an oscillatory convergence.
3- If g′(x0) is positive the convergence will be monotone (increasing or decreasing).
4- If |g′(x0)| > 1, then the iteration diverges.

Example: Use the iterative method to find the approximate roots of f(x) = x² - 2x - 3 = 0, and determine the convergence type.
Solution:
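A sketch of one possible solution (illustrative; the rearrangement g(x) = sqrt(2x + 3) is an assumption, not the lecture's worked answer):

import math

# Rewrite x^2 - 2x - 3 = 0 as x = g(x) = sqrt(2x + 3).
# Then g'(x) = 1/sqrt(2x + 3), so g'(3) = 1/3: 0 < g'(x) < 1 near the root x = 3
# and the iteration converges monotonically.
g = lambda x: math.sqrt(2*x + 3)

x = 4.0                        # initial guess
for n in range(1, 8):
    x = g(x)
    print(n, x)
# The iterates decrease monotonically toward the root x = 3.
# Choosing g(x) = (x^2 - 3)/2 instead would diverge, since |g'(3)| = 3 > 1.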
Numerical Solution of Non-Linear Simultaneous Equations
Consider a system of two non-linear equations:
f(x, y) = 0
g(x, y) = 0 ............ (1)
Starting from an initial approximate solution (x0, y0), we will study three numerical methods to solve this type of system.

1- Iterative method:
Let
f(x, y) = 0
g(x, y) = 0 ............ (1)
First, the system can be written in the form
x = F(x, y)
y = G(x, y)
Starting from the initial value (x0, y0), the approximate solutions (xi, yi) are generated by the iterative formula
xi+1 = F(xi, yi)
yi+1 = G(xi, yi),  where i = 0, 1, 2, 3, ...
Stop the iteration when |xi+1 - xi| ≤ ε and |yi+1 - yi| ≤ ε.

Sufficient conditions for convergence of the iteration method for solving non-linear equations are:
|∂F/∂x| + |∂G/∂x| < 1   at (x0, y0)
and
|∂F/∂y| + |∂G/∂y| < 1   at (x0, y0)

Example: Find an approximate solution of the following systems:

1- f(x, y) = 2x² - xy - 5x + 1 = 0
   g(x, y) = x + 3 Log(x) - y² = 0,  where (x0, y0) = (3.5, 2.5)

2- x² - y² - 7 = 0
   x² + y² - 25 = 0,  where (x0, y0) = (3, 4)

Solution:
Let f(x, y) = 2x² - xy - 5x + 1 = 0
    g(x, y) = x + 3 Log(x) - y² = 0 .............. (1)
From equation (1) we have
x = sqrt( ( x(y + 5) - 1 ) / 2 ) = F(x, y)
y = sqrt( x + 3 Log(x) ) = G(x, y)

And since (x0, y0) = (3.5, 2.5):
(∂F/∂x)(x0, y0) = 0.5276,  (∂F/∂y)(x0, y0) = 0.246,  (∂G/∂x)(x0, y0) = 0.3029,  (∂G/∂y)(x0, y0) = 0

|∂F/∂x| + |∂G/∂x| = 0.8305 < 1   at (x0, y0)
and
|∂F/∂y| + |∂G/∂y| = 0.246 < 1    at (x0, y0)

so the sufficient conditions for convergence are satisfied.
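
An illustrative Python sketch carrying out this iteration (it assumes Log means log base 10, which matches the derivative values above):

import math

F = lambda x, y: math.sqrt((x * (y + 5) - 1) / 2)   # from 2x^2 - xy - 5x + 1 = 0
G = lambda x, y: math.sqrt(x + 3 * math.log10(x))   # from x + 3 Log(x) - y^2 = 0

x, y = 3.5, 2.5
for i in range(50):
    x_new, y_new = F(x, y), G(x, y)
    if abs(x_new - x) < 1e-4 and abs(y_new - y) < 1e-4:
        x, y = x_new, y_new
        break
    x, y = x_new, y_new
print(x, y)   # slowly approaches a solution near x ≈ 3.49, y ≈ 2.26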
2- Newton-Raphson Method

Consider the system of two nonlinear equations in two variables:
f(x, y) = 0
g(x, y) = 0 .............. (1)
Knowing that (x0, y0) is an approximate solution of (1), we have to find a better approximate solution of the system (1).
Let (x0 + h, y0 + k) be the exact solution of the system (1). Therefore
f(x0 + h, y0 + k) = 0
g(x0 + h, y0 + k) = 0
Using Taylor's expansion and keeping only the linear terms:
fx(x0, y0)·(x - x0) + fy(x0, y0)·(y - y0) = -f(x0, y0)
gx(x0, y0)·(x - x0) + gy(x0, y0)·(y - y0) = -g(x0, y0)

Using Cramer's rule to solve this linear system (with all functions and derivatives evaluated at (x0, y0)):
x - x0 = h = [ (-f)·gy - fy·(-g) ] / [ fx·gy - fy·gx ]
y - y0 = k = [ fx·(-g) - (-f)·gx ] / [ fx·gy - fy·gx ]

We then obtain the approximate solution
x1 = x0 + h and y1 = y0 + k.
In subsequent iterations all functions involved are evaluated at (xi, yi).
We stop the iteration when |xi+1 - xi| ≤ ε and |yi+1 - yi| ≤ ε.
Notes: A set of conditions sufficient to ensure convergence is the following:
1- f(x, y) and g(x, y) and their derivatives are continuous in a closed bounded region R.
2- The denominator (the determinant fx·gy - fy·gx) does not vanish in R.
3- The initial approximation (x0, y0) is chosen sufficiently close to the root.
Example: Find an approximate solution of the following systems (a code sketch for system 2 is given after the list):

1- f(x, y) = 2x² - xy - 5x + 1 = 0
   g(x, y) = x + 3 Log(x) - y² = 0,  where (x0, y0) = (3.5, 2.5)

2- x² - y² - 7 = 0
   x² + y² - 25 = 0,  where (x0, y0) = (3, 4)

3- x² - 2x - y + 0.5 = 0
   x² + 4y² - 4 = 0,  at (2, 0.25)

4- x² + y² - 2 = 0
   xy - 1 = 0,  at (1, 1)
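
A minimal Python sketch of the two-equation Newton-Raphson step, applied to system 2 (illustrative code, not from the lecture):

def newton_system(f, g, fx, fy, gx, gy, x, y, eps=1e-4, max_iter=50):
    for _ in range(max_iter):
        D = fx(x, y) * gy(x, y) - fy(x, y) * gx(x, y)          # Jacobian determinant
        h = (-f(x, y) * gy(x, y) + fy(x, y) * g(x, y)) / D     # Cramer's rule for h
        k = (-fx(x, y) * g(x, y) + f(x, y) * gx(x, y)) / D     # Cramer's rule for k
        x, y = x + h, y + k
        if abs(h) < eps and abs(k) < eps:
            break
    return x, y

# System 2: x^2 - y^2 - 7 = 0, x^2 + y^2 - 25 = 0, starting at (3, 4)
print(newton_system(lambda x, y: x**2 - y**2 - 7,
                    lambda x, y: x**2 + y**2 - 25,
                    lambda x, y: 2*x, lambda x, y: -2*y,
                    lambda x, y: 2*x, lambda x, y: 2*y,
                    3.0, 4.0))   # converges to (4.0, 3.0)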
Summary

Method     Pros                                        Cons
---------  ------------------------------------------  ---------------------------------------------
Bisection  - Easy, reliable, convergent                - Slow
           - One function evaluation per iteration     - Needs an interval [a, b] containing
           - No knowledge of the derivative is needed    the root, i.e., f(a)·f(b) < 0

Newton     - Fast (if near the root)                   - May diverge
           - Two function evaluations per iteration    - Needs the derivative and an initial
                                                         guess x0 such that f′(x0) is nonzero

Secant     - Fast (slower than Newton)                 - May diverge
           - One function evaluation per iteration     - Needs two initial guesses x0, x1
           - No knowledge of the derivative is needed    such that f(x0) - f(x1) is nonzero
