Numerical Analysis
❑ Course webpage:
❑ https://relate.cs.illinois.edu/course/cs450-s22/
❑ TAs:
❑ Example (advection equation):

      ∂u/∂t = c ∂u/∂x,   + initial conditions
                         + boundary conditions
❑ Examples
❑ Weather prediction (data assimilation)
❑ Image recognition
❑ Etc.
• Conditioning of a problem
• Condition number
• Stability of an algorithm
• Errors
– Relative / absolute error
– Total error = computational error + propagated-data error
– Truncation errors
– Rounding errors
• Floating point numbers: IEEE 64
• Floating point arithmetic
– Rounding errors
– Cancellation
Main Topics / Take-Aways for Chapter 1 (2/2)
Well-Posed Problems
General Strategy
Sources of Approximation
Before computation
modeling
empirical measurements
previous computations
During computation
truncation or discretization
rounding
Example: Approximations
Relative error  =  absolute error / true value
• Taylor series are fundamental to numerical methods and analysis.

• Take m = 2:

      (f(x+h) − f(x))/h  =  f'(x)  +  (h/2) f''(ξ)
        [computable]        [desired result]  [truncation error]
• Newton's method, optimization algorithms, and numerical solution of
  differential equations all rely on understanding the behavior of functions
  in the neighborhood of a specific point or set of points.
• Truncation error: |(h/2) f''(ξ)| ≈ (h/2) |f''(x)| as h → 0.
• In essence, numerical methods convert calculus from the continuous back
  to the discrete.

• ( A way of avoiding calculus. :) )

Q: Suppose |f''(x)| ≈ 1. Can we take h = 10⁻³⁰ and expect

      | (f(x+h) − f(x))/h − f'(x) |  ≈  10⁻³⁰/2 ?
Truncation Error Example

❑ Recall Taylor series:

• If f⁽ᵏ⁾ exists (is bounded) on [x, x+h], k = 0, …, m, then there exists
  a ξ ∈ [x, x+h] such that

      f(x+h) = f(x) + h f'(x) + (h²/2) f''(x) + ⋯ + (hᵐ/m!) f⁽ᵐ⁾(ξ).

• Take m = 2:

      (f(x+h) − f(x))/h  =  f'(x)  +  (h/2) f''(ξ)
        [computable]        [desired result]  [truncation error]

• Truncation error: |(h/2) f''(ξ)| ≈ (h/2) |f''(x)| as h → 0.

  – To be precise, (h/2) f''(ξ) = (h/2) f''(x) + O(h²).

• Can use the Taylor series to generate approximations to f'(x), f''(x),
  etc., by evaluating f at x, x ± h, x ± 2h.

• We then solve for the desired derivative and consider the limit h → 0.

Q: Suppose |f''(x)| ≈ 1. Can we take h = 10⁻³⁰ and expect

      | (f(x+h) − f(x))/h − f'(x) |  ≈  10⁻³⁰/2 ?
Taylor Series
(Very important for SciComp!)

❑ Recall Taylor series:

• If f⁽ᵏ⁾ exists (is bounded) on [x, x+h], k = 0, …, m, then there exists
  a ξ ∈ [x, x+h] such that

      f(x+h) = f(x) + h f'(x) + (h²/2) f''(x) + ⋯ + (hᵐ/m!) f⁽ᵐ⁾(ξ).

• Take m = 2:

      (f(x+h) − f(x))/h  =  f'(x)  +  (h/2) f''(ξ)
        [computable]        [desired result]  [truncation error]

• Truncation error: |(h/2) f''(ξ)| ≈ (h/2) |f''(x)| as h → 0.
      | (f(x+h) − f(x))/h − f'(x) |  ≈  10⁻³⁰/2 ?

A: Only if we can compute every term in the finite-difference formula
(our algorithm) with sufficient accuracy.
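The answer is visible directly in floating point. A minimal sketch (the function and variable names are mine, and f(x) = sin x is just an illustrative choice): with h = 10⁻³⁰, the sum x + h rounds back to x in IEEE double precision, so the difference quotient is exactly zero rather than f'(x) to 30 digits.

```python
import math

def forward_diff(f, x, h):
    """One-sided difference quotient (f(x+h) - f(x)) / h."""
    return (f(x + h) - f(x)) / h

x = 1.0
# h far below machine epsilon: x + h == x, so the quotient is exactly 0.0
print(forward_diff(math.sin, x, 1e-30))
# h near sqrt(eps): the quotient approximates f'(1) = cos(1) = 0.5403...
print(forward_diff(math.sin, x, 1e-8))
print(math.cos(x))
```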
Scientific Computing (Heath): Sources of Approximation, Error Analysis,
Sensitivity and Conditioning

[Figure: log-log plot of error vs. step size h for the finite-difference
example. Rounding error dominates and grows as h decreases; truncation error
dominates and grows as h increases; the total error is minimized at an
intermediate step size h.]

Michael T. Heath, Scientific Computing, 12 / 46
[Same figure, annotated: for small h the rounding-error curve behaves like
(2 ε_M / h) f(x); the minimum total error occurs where
(2 ε_M / h) f(x) ~ (1/2) h f''(x).]
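The crossover in the figure is easy to reproduce. A sketch (names are mine; f(x) = sin x at x = 1 is the test function): sweeping h by decades shows the error falling like (h/2)|f''(x)| until h ~ sqrt(ε_M) ~ 10⁻⁸, then rising like ε_M/h once rounding dominates.

```python
import math

def fd_error(h, x=1.0):
    """Absolute error of the forward difference for f = sin at x."""
    approx = (math.sin(x + h) - math.sin(x)) / h
    return abs(approx - math.cos(x))

hs = [10.0 ** -k for k in range(1, 16)]
errors = {h: fd_error(h) for h in hs}
for h in hs:
    print(f"h = {h:8.0e}   error = {errors[h]:.2e}")

best_h = min(errors, key=errors.get)
print("minimum near h =", best_h)
```

The V-shaped table of errors mirrors the plot above: the best achievable accuracy is roughly sqrt(ε_M), not ε_M.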
Round-Off Error
❑ In general, round-off error will prevent us from representing f(x)
and f(x+h) with sufficient accuracy to reach such a result.
Forward error:  Δy = ŷ − y

As an approximation to y = √2, ŷ = 1.4 has absolute forward error
|Δy| = |1.4 − 1.41421…| ≈ 0.0142.

Example, continued

ŷ = f̂(x) = 1 − x²/2, the truncated Taylor series for f(x) = cos x.
For x = 1, ŷ = 0.5.
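Working this example numerically (a sketch; variable names are mine, with f(x) = cos x as the function behind the truncated series): the forward error compares outputs, while the backward error asks which input cos would have to receive to produce ŷ exactly.

```python
import math

x = 1.0
y = math.cos(x)            # true value, 0.5403...
yhat = 1 - x**2 / 2        # truncated-series approximation, 0.5

forward = abs(yhat - y)    # forward error |yhat - y|, about 0.0403
xhat = math.acos(yhat)     # input for which cos(xhat) equals yhat exactly
backward = abs(xhat - x)   # backward error |xhat - x|, about 0.0472
print(forward, backward)
```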
Condition number:

      cond = |relative change in solution| / |relative change in input data|

Condition Number

      relative forward error  ≲  cond × relative backward error
Example: Sensitivity

Using the formula  cond = |x f'(x) / f(x)|,  what is
the condition number of the following?

      f(x) = a x

      f(x) = a / x

      f(x) = a + x
Condition Number Examples

      cond = |x f'(x) / f(x)|

For f(x) = a x:    f' = a,        cond = |x a / (a x)| = 1.

For f(x) = a / x:  f' = −a/x²,    cond = |x (−a/x²) / (a/x)| = 1.

For f(x) = a + x:  f' = 1,        cond = |x · 1 / (a + x)| = |x| / |a + x|.

• The condition number for (a + x) is < 1 if a and x are of the same sign,
  but it is > 1 if they are of opposite sign, and potentially ∞ if they are
  of opposite sign but close to the same magnitude.
  This ill-conditioning is often referred to as cancellation.
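A minimal numerical check (the values a = −1 and x = 1.000001 are my choice for illustration): here cond = |x|/|a + x| is about 10⁶, and a relative perturbation of the input is amplified by almost exactly that factor.

```python
a = -1.0
x = 1.000001
eps = 1e-10                       # relative perturbation of the input x

y = a + x                         # about 1e-6: heavy cancellation
y_pert = a + x * (1 + eps)
rel_out = abs(y_pert - y) / abs(y)

cond = abs(x) / abs(a + x)        # condition number of a + x, about 1e6
print(cond)
print(rel_out / eps)              # amplification factor, close to cond
```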
      (f(x+h) − f(x))/h = f'(x) + (h/2) f''(ξ).
Stability
Accuracy
      x̂ = x + Δx
• Number is represented as

      x = ± ( d₀ + d₁/β + d₂/β² + ⋯ + d_{p−1}/β^{p−1} ) β^E

  – mantissa: d₀ d₁ ⋯ d_{p−1}
  – fraction: d₁ d₂ ⋯ d_{p−1}
Normalization

❑ Decimal:
  ❑ 1.2345
  ❑ 9.814 × 10⁻²
❑ Binary:
  ❑ 1.010 × 2⁻³
  ❑ 1.111 × 2⁴
❑ Not normalized: .00101
Binary Representation of π

• In 64-bit floating point,

    π₆₄ ≈ 1.1001001000011111101101010100010001000010110100011 × 2¹

• In reality,

    π       = 1.10010010000111111011010101000100010000101101000110000100011010⋯ × 2¹
    π − π₆₄ = 0.00000000000000000000000000000000000000000000000000000100011010⋯ × 2¹
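These bits can be read straight off the stored double. A sketch using only the standard library (the slicing follows the IEEE binary64 layout: 1 sign bit, 11 exponent bits, 52 fraction bits):

```python
import math
import struct

# Reinterpret the 8 bytes of the double as a 64-bit integer, then as bits.
bits = format(struct.unpack(">Q", struct.pack(">d", math.pi))[0], "064b")
sign, exponent, fraction = bits[0], bits[1:12], bits[12:]

print(sign)                       # '0' (positive)
print(int(exponent, 2) - 1023)    # unbiased exponent: 1
print("1." + fraction)            # significand of pi_64
```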
Rounding Error

Floating-Point Numbers

  – mantissa: d₀ d₁ ⋯ d_{p−1}
  – fraction: d₁ d₂ ⋯ d_{p−1}

Floating-point number system is finite and discrete.

• Sign, exponent, and mantissa are stored in fixed-width fields of each
  floating-point word.

Total number of normalized floating-point numbers is

      2 (β − 1) β^{p−1} (U − L + 1) + 1  ≈  2ʷ,   where w = number of bits in a word

Smallest positive normalized number: UFL = β^L
Rounding Rules

If real number x is not exactly representable, then it is approximated by a
“nearby” floating-point number fl(x). This process is called rounding, and
the error introduced is called rounding error.

Two commonly used rounding rules:

  chop: truncate base-β expansion of x after (p−1)st digit; also called
  round toward zero

  round to nearest: fl(x) is the nearest floating-point number to x, using
  the floating-point number whose last stored digit is even in case of a
  tie; also called round to even

Round to nearest is most accurate, and is the default rounding rule in
IEEE systems.
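Round-to-even is observable from Python, whose int-to-float conversion rounds correctly (a small sketch; the specific integers are my choice). 2⁵³ + 1 lies exactly halfway between the representable doubles 2⁵³ and 2⁵³ + 2, so the tie goes to the neighbor whose last significand bit is even:

```python
lo = 2**53            # doubles at this magnitude are spaced 2 apart

# Tie halfway between 2**53 and 2**53 + 2: rounds to 2**53 (even significand).
print(float(lo + 1) == float(lo))
# Tie halfway between 2**53 + 2 and 2**53 + 4: rounds to 2**53 + 4 (even).
print(float(lo + 3) == float(lo + 4))
```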
Machine Precision

      fl(x) = x (1 + εₓ),   |εₓ| ≤ ε_mach

Example:

  x  = 3141592653589793238462643383279502884197169399375105820974944.9230781… = π × 10⁶⁰
  x̂  = 3141592653589793000000000000000000000000000000000000000000000.0000000… ≈ π × 10⁶⁰
  x − x̂ ≈ 2.4 × 10⁴⁴
  p = 53:      ε_mach = 2⁻⁵³ ≈ 10⁻¹⁶
  L = −1022:   UFL = 2⁻¹⁰²² ≈ 10⁻³⁰⁸
  U = 1023:    OFL ≈ 2¹⁰²³ ≈ 10³⁰⁸
❑ With normalization, the smallest (positive) number you can represent is UFL = 2⁻¹⁰²² ≈ 10⁻³⁰⁸.
❑ Similarly, for IEEE DP, OFL ~ 10³⁰⁸ >> number of atoms in the universe.
→ Overflow will never be an issue (unless your solution goes unstable).
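These IEEE double-precision constants are available directly from the standard library, which makes them easy to check:

```python
import sys

info = sys.float_info
print(info.mant_dig)   # 53 significand bits (p = 53)
print(info.epsilon)    # 2**-52, the gap between 1.0 and the next double;
                       # eps_mach = 2**-53 above is half this gap (round to nearest)
print(info.min)        # UFL = 2**-1022, about 2.2e-308
print(info.max)        # OFL, about 1.8e308
```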
Exceptional Values

Floating-Point Arithmetic

Assume β = 10, p = 6

Cancellation

For example,

    x     = 1 . 0 1 1 0 0 1 0 1 b   b   g g g g   × 2ᵉ
    y     = 1 . 0 1 1 0 0 1 0 1 b′  b′  g g g g   × 2ᵉ
    x − y = 0 . 0 0 0 0 0 0 0 0 b′′ b′′ g g g g   × 2ᵉ
          = b′′ . b′′ g g g g ? ? ? ? ? ? ? ? ?   × 2ᵉ⁻⁹
Cancellation, continued

Using the Taylor series

      eˣ = 1 + x + x²/2! + x³/3! + ⋯

for x < 0 may give disastrous results due to catastrophic cancellation.
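A classic demonstration (a sketch; the function name and term count are mine): summing the series directly at x = −20 adds alternating terms as large as ~4×10⁷ that must cancel down to ~2×10⁻⁹, so most digits of the result are rounding noise. Rewriting exp(−20) as 1/exp(20) sums only positive terms and avoids the cancellation.

```python
import math

def exp_taylor(x, n_terms=120):
    """Sum the Taylor series 1 + x + x**2/2! + ... directly."""
    term, total = 1.0, 1.0
    for k in range(1, n_terms):
        term *= x / k            # next term, x**k / k!
        total += term
    return total

x = -20.0
naive = exp_taylor(x)            # ruined by cancellation among huge terms
fixed = 1.0 / exp_taylor(-x)     # all-positive series: no cancellation
print(naive, fixed, math.exp(x))
```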
Finite Differences and Truncation/Round-Off Error

• Taylor Series:¹

      δf/δx := (f(x+h) − f(x))/h = f'(x) + (h/2) f''(ξ)
        [computable]              [desired result]  [truncation error]

• So, expect e_abs := |δf/δx − df/dx| = O(h) → 0 as h → 0.

• Computed value of δf/δx, using the standard model:

      ⟨δf/δx⟩ = (⟨f(x+h)⟩ − ⟨f(x)⟩) / h
              = (f(x+h)(1 + ε₊) − f(x)(1 + ε₀)) / h
              = (f(x+h) − f(x))/h + (f(x+h)ε₊ − f(x)ε₀)/h,

  – ε₊, ε₀ random variables of arbitrary sign with magnitude ≤ ε_M.
  – This computed expression ignores round-off in x, h, x + h and division.
  – Can show (by Taylor series expansions) that those errors are benign.

• Use f(x+h) = f(x) + h f'(ξ)² to write the first round-off term as:

      (f(x+h)ε₊ − f(x)ε₀)/h = ((f(x) + h f'(ξ))ε₊ − f(x)ε₀)/h
                            = f(x)(ε₊ − ε₀)/h + f'(ξ)ε₊
                            ≈ f(x)(ε₊ − ε₀)/h = (ε_M/h) f(x) R,

  where R is a random variable with |R| < 2.

• So, the computed approximation is

      ⟨δf/δx⟩ ≈ f'(x) + (h/2) f''(ξ) + R (ε_M/h) f(x).
                        [TE]           [RE]

• Notice, crucially, that the units of each term on the right match.

¹ Assuming f, f' and f'' bounded on [x, x + h].
² The unknown value of ξ is different from the one in the original Taylor series expansion.