Solution Techniques and Error Analysis of General Classes of Partial Differential Equations
by
Wijayasinghe Arachchige Waruni Nisansala Wijayasinghe
A thesis
submitted in partial fulfillment
of the requirements for the degree of
Master of Science in Mathematics
Boise State University
May 2016
© 2016
Wijayasinghe Arachchige Waruni Nisansala Wijayasinghe
ALL RIGHTS RESERVED
BOISE STATE UNIVERSITY GRADUATE COLLEGE
Thesis Title: Solution Techniques and Error Analysis of General Classes of Partial
Differential Equations
The following individuals read and discussed the thesis submitted by student
Wijayasinghe Arachchige Waruni Nisansala Wijayasinghe, and they evaluated her
presentation and response to questions during the final oral examination. They found that
the student passed the final oral examination.
The final reading approval of the thesis was granted by Barbara Zubik-Kowal, Ph.D.,
Chair of the Supervisory Committee. The thesis was approved for the Graduate College
by John R. Pelton, Ph.D., Dean of the Graduate College.
DEDICATION
සහෘදයාෙණB
ACKNOWLEDGMENTS
I would like to express my appreciation to Dr. Uwe Kaiser and Dr. Randall Holmes for their valuable suggestions and for having served on my committee.
My thanks also goes to Dr. Leming Qu, Dr. Jodi Mead, Dr. Grady Wright, Dr.
Jaechoul Lee, and Dr. Marion Scheepers for helping me with my coursework and
research.
Finally, I would like to thank my parents and my husband for their love and encouragement throughout this work.
ABSTRACT
TABLE OF CONTENTS
ABSTRACT
1 Introduction
REFERENCES
APPENDIX
CHAPTER 1
INTRODUCTION
Functional differential equations are broadly applied in many scientific disciplines, such as biology, medicine, physics, engineering, and economics. Throughout their long history, functional differential equations have been investigated by many authors with respect to a multitude of aspects, for which we refer the reader to [1], [2], [5]-[10], [12], [14]-[21], [23]-[28], [30], [31], [38] for ordinary functional differential equations and to [3], [4], [11], [13], [22], [29]-[38] for partial functional differential equations. Aspects connected with modeling with functional differential equations are presented in [3], [4], [13], [14], [24]. One of the main problems is that many of these equations admit no analytical formulae for their exact solutions, so it has become essential to study their approximations in order to gain insight into the solutions of the model equations and to conduct numerical simulations. In order to obtain reliable approximate solutions, a careful mathematical analysis of their errors has to be conducted. In this thesis, we expand on the development of [37] by filling in the details of the proofs to investigate errors of approximate solutions to a class of parabolic partial functional differential equations. For similar developments and proof techniques for partial functional differential equations, as well as numerical experiments for this class of equations, we refer the reader to [4, 22, 34, 35, 36, 38].

Originating from a multitude of areas of application to the real world around us, partial functional differential equations form a general class of problems that includes partial differential equations as one of its subclasses.
where $u \in C(B, \mathbb{R})$ is an unknown function (see below for $B$), $x \in [-L, L]$, $t \in [0, T]$, and $f : [-L, L] \times [0, T] \times C(D, \mathbb{R}) \times \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ represents any given continuous function. Another generalization in (1.1) is introduced by the argument $u_{(x,t)}$; for any fixed $x \in [-L, L]$ and $t \in [0, T]$, $u_{(x,t)}$ is a function. Such a functional argument allows one to generate differential equations with, e.g., a time delay and a shift in space. Unlike for classical partial differential equations, the third argument $u_{(x,t)}$ in (1.1) is not a real value but a real function defined on $D$ (see Figure 1 for $D$), called a functional argument. The functional argument $u_{(x,t)} \in C(D, \mathbb{R})$, for $x \in [-L, L]$, $t \in [0, T]$, and $u \in C(B, \mathbb{R})$, with $L, T > 0$, $B = [-\hat{L}, \hat{L}] \times [-\tau_0, T]$, $D = [-\hat{\tau}, \hat{\tau}] \times [-\tau_0, 0]$, $\hat{\tau}, \tau_0 \ge 0$, $\hat{L} = L + \hat{\tau}$, is defined as
The partial differential equation (1.1) describes a general class of problems. For example, if $f$ is defined by

where $\epsilon$ is a positive constant, then (1.1) can be written in the following form
$$\frac{\partial u}{\partial t}(x, t) = \epsilon\,\frac{\partial^2 u}{\partial x^2}(x, t) + u(x, t)\bigl(1 - u(x, t - \tau_0)\bigr).$$
Another example is obtained when $f$ is defined by
$$f(x, t, \omega, p, q) = a(t)\,q + a(t) \int_{-\tau_0}^{0}\!\!\int_{-\hat{\tau}}^{\hat{\tau}} \omega(s, \tau)\, ds\, d\tau, \qquad (1.5)$$
in which case (1.1) takes the form
$$\frac{\partial u}{\partial t}(x, t) = a(t)\,\frac{\partial^2 u}{\partial x^2}(x, t) + a(t) \int_{-\tau_0}^{0}\!\!\int_{-\hat{\tau}}^{\hat{\tau}} u(x + s, t + \tau)\, ds\, d\tau.$$
$$\frac{\partial u}{\partial t}(x, t) = \tilde{f}\Bigl(x, t,\ u(x, t),\ \frac{\partial u}{\partial x}(x, t),\ \frac{\partial^2 u}{\partial x^2}(x, t)\Bigr),$$
[Figure 1: the sets $B$ and $D$. The vertical axis is space $x$, ranging over $[-L-\hat{\tau}, L+\hat{\tau}]$ with boundary strips $L \le |x| \le L+\hat{\tau}$; the horizontal axis is time $t$, ranging over $[-\tau_0, T]$. The shaded rectangles $D_{(x,t)}$ indicate the domain $D$ of the functional argument translated to a point $(x, t)$.]
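To make this class of equations concrete, the following sketch (not taken from [37]; the parameter values, the constant initial function, and the boundary data are illustrative assumptions) integrates the delayed reaction-diffusion example above by the method of lines, with central second differences in space and the explicit Euler method in time:

```python
import numpy as np

# Method-of-lines sketch for the delayed reaction-diffusion example
#   u_t = eps * u_xx + u(x, t) * (1 - u(x, t - tau0)),
# with central second differences in space and explicit Euler in time.
# All data below (parameters, constant initial function 0.5, boundary
# values held at 0.5) are illustrative assumptions, not thesis data.
L, T, eps, tau0 = 1.0, 1.0, 0.01, 0.1
M = 20
h = L / M
x = np.linspace(-L, L, 2 * M + 1)

dt = tau0 / 50                 # the delay is a whole number of steps;
lag = round(tau0 / dt)         # dt also satisfies dt < h^2 / (2 * eps)

# history[0] holds u(., t - tau0) and history[-1] holds u(., t)
history = [np.full_like(x, 0.5) for _ in range(lag + 1)]

for _ in range(round(T / dt)):
    u = history[-1]
    delayed = history[0]
    unew = u.copy()
    unew[1:-1] = u[1:-1] + dt * (
        eps * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2
        + u[1:-1] * (1.0 - delayed[1:-1])
    )
    unew[0] = unew[-1] = 0.5   # Dirichlet boundary data g = 0.5
    history = history[1:] + [unew]

u_final = history[-1]
```

With a constant history of 0.5, the interior of the solution drifts toward the equilibrium $u \equiv 1$ while the boundary is held at 0.5, illustrating how the functional (delayed) argument feeds into the semi-discrete system.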
CHAPTER 2
DEFINITIONS
The general class of equations given by (1.1) is written in terms of arbitrary functions
f and, in many cases, analytic solutions to these equations defined in the continuum
sense are unknown and approximated by numerical solutions computed on discrete
subsets. For any element of any of the discrete subsets (such elements are referred
to as grid-points), there exists an open neighborhood that is disjoint from the other
grid-points. The discrete subsets are finite and determined by parameters affecting the coarseness of the corresponding discretizations. In the literature, the process of semi-discretization has also been referred to as the Method of Lines. Letting the
values of these parameters approach zero causes the corresponding discretizations
to become finer. The goal of the thesis is to study error bounds of the approximate
solutions defined on the discrete subsets and to address the question of whether or not
they get closer to the exact solutions as the discretization becomes finer – a property
desired of discretization.
We replace the spatial derivatives in (1.1) by discrete operators. Let the spatial step-size be $h > 0$ and let $M, \hat{M} \in \mathbb{N}$ be such that $Mh = L$ and $\hat{M}h = \hat{\tau}$. Then, we define $x_j = jh$, for $j = 0, \pm 1, \pm 2, \ldots, \pm\tilde{M}$, where $\tilde{M} = M + \hat{M}$. Henceforth, we also use the notation $M' = M - 1$ and $n = 2M - 1$.
whose solution $\eta(t) \in \mathbb{R}^n$ depends on $h$ and (as will be shown in the next chapters) converges to $(u(x_{-M'}, t), \ldots, u(x_{M'}, t))$ as $h \to 0$. Note that $n \to \infty$ as $h \to 0$, and that the dimension of the system (2.1) increases as the discretization becomes finer.
Here, the segment notation is defined by $\eta_t(\tau) = \eta(t + \tau)$, for $\tau \in [-\tau_0, 0]$. The right-hand side of (2.1) is built from
$$F_i(t, z, \omega) = f\bigl(x_i, t,\ L_{i,t}\,\omega,\ \delta_{i,t}\,z,\ \delta^2_{i,t}\,z\bigr), \qquad (2.2)$$
where the interpolation operator $L_{i,t}$ is defined by
$$[L_{i,t}\,\omega](s, \tau) = \frac{x_{k+1} - s}{h}\,\omega^t_{k+i}(\tau) + \frac{s - x_k}{h}\,\omega^t_{k+1+i}(\tau), \qquad x_k \le s \le x_{k+1},$$
where $g$ is defined in (1.3) and enters through the extension $\omega^t$ of $\omega$ by the boundary values. The discrete operators $\delta_{i,t}$ and $\delta^2_{i,t}$ in (2.2) are defined, for $t \in [0, T]$, $i = 0, \pm 1, \ldots, \pm M'$, and $z = (z_{-M'}, \ldots, z_{M'}) \in \mathbb{R}^n$, by
$$\delta_{i,t}\,z = \frac{z^t_{i+1} - z^t_{i-1}}{2h}, \qquad (2.3)$$
$$\delta^2_{i,t}\,z = \frac{z^t_{i+1} - 2 z^t_i + z^t_{i-1}}{h^2}, \qquad (2.4)$$
where
$$z^t_i = \begin{cases} g(x_i, t), & \text{for } i = \pm M, \\ z_i, & \text{for } i = 0, \pm 1, \ldots, \pm(M-1). \end{cases}$$
As will be shown in Chapter 4, the operators (2.3) and (2.4) approximate the first and second order derivatives (respectively) at the point $x_i$.
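As a concrete illustration (not code from the thesis; the test function $\sin x$, the boundary values, and the grid sizes are assumptions), the operators (2.3) and (2.4), including the boundary extension $z^t$, can be sketched as:

```python
import numpy as np

# Sketch of the discrete operators (2.3) and (2.4), including the boundary
# extension z^t that substitutes g(x_{+-M}, t) at i = +-M. The test function
# sin(x) and the grid sizes are illustrative assumptions.
def extend(z, g_left, g_right):
    # build z^t from the interior vector z = (z_{-M'}, ..., z_{M'})
    return np.concatenate(([g_left], z, [g_right]))

def delta(z, h, g_left, g_right):
    # central first difference (2.3): (z^t_{i+1} - z^t_{i-1}) / (2h)
    zt = extend(z, g_left, g_right)
    return (zt[2:] - zt[:-2]) / (2.0 * h)

def delta2(z, h, g_left, g_right):
    # central second difference (2.4): (z^t_{i+1} - 2 z^t_i + z^t_{i-1}) / h^2
    zt = extend(z, g_left, g_right)
    return (zt[2:] - 2.0 * zt[1:-1] + zt[:-2]) / h**2

L, M = 1.0, 100
h = L / M
x = np.linspace(-L, L, 2 * M + 1)        # grid points x_{-M}, ..., x_M
z = np.sin(x[1:-1])                      # interior values z_i = u(x_i)
d1 = delta(z, h, np.sin(-L), np.sin(L))
d2 = delta2(z, h, np.sin(-L), np.sin(L))
err1 = np.max(np.abs(d1 - np.cos(x[1:-1])))
err2 = np.max(np.abs(d2 + np.sin(x[1:-1])))
```

For $u(x) = \sin x$ both errors are of size $h^2$, in line with the estimates proved in Chapter 4.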
for $t \in [-\tau_0, 0]$, where $u_0 : [-\hat{L}, \hat{L}] \times [-\tau_0, 0] \to \mathbb{R}$ is the initial function given in the original problem. The goal of this thesis is to work through the proof techniques of [37] to help address the question of whether or not the components of the solution to (2.1) converge to the corresponding values of the exact solution $u$.
CHAPTER 3
FUNCTIONAL SYSTEMS
In this chapter, we construct iterative procedures for solving the general problem (2.1) and thus (1.1)–(1.3). The process is summarised in the form of the following algorithm. Let $\eta^{(0)} : [-\tau_0, T] \to \mathbb{R}^n$ be an arbitrary function. We define the sequence of vector functions $\eta^{(k)} : [-\tau_0, T] \to \mathbb{R}^n$, where $k = 0, 1, 2, \ldots$, recursively, by
$$\dot{\eta}^{(k+1)}(t) = G\bigl(t,\ \eta^{(k+1)}(t),\ \eta^{(k)}(t),\ \eta^{(k)}_t\bigr), \quad t \in [0, T], \qquad (3.1)$$
$$\eta^{(k+1)}(t) = \tilde{u}_0(t), \quad t \in [-\tau_0, 0].$$
The functions $G$ are chosen according to the given $F$ and are referred to as splitting functions. The function $\eta^{(0)}$ is referred to as a starting function, and the functions $\eta^{(k)}$ are referred to as the successive iterates.
for $i = 1, 2, \ldots, n$, where $F_i$ is defined, for example, by (2.2), then (3.1) generates the following iterative process of the Picard type in the functional sense:
$$\dot{\eta}_i^{(k+1)}(t) = F_i\bigl(t,\ \eta_1^{(k)}(t), \ldots, \eta_{i-1}^{(k)}(t),\ \eta_i^{(k+1)}(t),\ \eta_{i+1}^{(k)}(t), \ldots, \eta_n^{(k)}(t),\ \eta^{(k)}_t\bigr), \qquad (3.2)$$
or, if $G$ is defined correspondingly with the first $i$ components evaluated at the new iterate, then (3.1) generates another iterative process of the Picard type in the functional sense:
$$\dot{\eta}_i^{(k+1)}(t) = F_i\bigl(t,\ \eta_1^{(k+1)}(t), \ldots, \eta_{i-1}^{(k+1)}(t),\ \eta_i^{(k+1)}(t),\ \eta_{i+1}^{(k)}(t), \ldots, \eta_n^{(k)}(t),\ \eta^{(k)}_t\bigr). \qquad (3.3)$$
The two processes differ in the indices of the successive iterates, and one may be preferable to the other depending on the problem.
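The distinction between the orderings in (3.2) and (3.3) can be seen on a small linear ODE system $y' = Ay$ (an illustrative sketch, not the thesis's scheme: the matrix, the explicit Euler discretization, and the sweep counts are assumptions, and the functional argument is omitted for brevity):

```python
import numpy as np

# Waveform-relaxation sketch for y' = A y, contrasting the ordering of (3.2)
# (all off-diagonal components from the previous iterate) with (3.3)
# (components 1..i-1 from the new iterate). The matrix, Euler grid, and
# sweep count are illustrative assumptions.
A = np.array([[-2.0, 1.0], [1.0, -2.0]])
y0 = np.array([1.0, 0.0])
N, T = 200, 1.0
dt = T / N
n = len(y0)

def euler_reference():
    # standard explicit Euler for the full coupled system
    y = y0.copy()
    for _ in range(N):
        y = y + dt * (A @ y)
    return y

def relax(gauss_seidel, sweeps):
    traj = np.tile(y0, (N + 1, 1))        # starting function: constant y0
    for _ in range(sweeps):
        new = traj.copy()
        for i in range(n):
            src = new if gauss_seidel else traj
            y = y0[i]
            for k in range(N):
                z = src[k].copy()
                z[i] = y                  # the i-th argument is the new iterate
                y = y + dt * (A[i] @ z)
                new[k + 1, i] = y
        traj = new
    return traj[-1]

ref = euler_reference()
err_jacobi = np.max(np.abs(relax(False, 12) - ref))
err_gs = np.max(np.abs(relax(True, 12) - ref))
```

Both orderings converge to the same discrete trajectory; the ordering of (3.3) reuses freshly computed components and often needs fewer sweeps in practice.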
$$F : [0, T] \times \mathbb{R}^n \times C([-\tau_0, 0], \mathbb{R}^n) \to \mathbb{R}^n,$$
and
$$\bigl|\dot{U}_i(t) - F_i(t, U(t), U_t)\bigr| \le \gamma_n(t), \qquad (3.5)$$
for all $t \in [0, T]$ and $i = -M', \ldots, M'$. Moreover, suppose that $u \in C^{(4)}(B, \mathbb{R})$ (the class of 4-times continuously differentiable functions from $B$ to $\mathbb{R}$) and that, for the given function
$$f : [-L, L] \times [0, T] \times C(D, \mathbb{R}) \times \mathbb{R} \times \mathbb{R} \to \mathbb{R},$$
For the iterative processes applied to (2.1), we assume that the functions $G$ satisfy
$$G(t, r(t), r(t), r_t) = F(t, r(t), r_t), \qquad (3.7)$$
for all $t \in [0, T]$, $r \in C([-\tau_0, T], \mathbb{R}^n)$ (note that for any $t \in [0, T]$, $r_t \in C([-\tau_0, 0], \mathbb{R}^n)$), and that there exist continuous functions $\mu_1 \in C([0, T], \mathbb{R})$, $\mu_2, \mu_3 \in C([0, T], \mathbb{R}_+)$ such that
$$\bigl\|\zeta - \bar{\zeta} - \varepsilon\,[G(t, \zeta, z, \omega) - G(t, \bar{\zeta}, z, \omega)]\bigr\|_n \ge \bigl(1 - \varepsilon\,\mu_1(t)\bigr)\,\|\zeta - \bar{\zeta}\|_n, \qquad (3.8)$$
CHAPTER 4
DIFFERENTIAL EQUATIONS
In this chapter, we present results that we will apply to derive error bounds for numerical solutions to partial functional differential equations. In the theorem below, the notation $C^{(4)}(B, \mathbb{R})$ refers to the class of 4-times continuously differentiable functions from $B$ to $\mathbb{R}$.
Theorem 4.1 ([37], Lemma 3.1). If $u \in C^{(4)}(B, \mathbb{R})$ and $f$ satisfies condition (3.6), then $F$ defined by (2.2) satisfies condition (3.5) with
$$\gamma_n(t) = \frac{c}{n^2}\bigl(\beta_1(t) + \beta_2(t) + \beta_3(t)\bigr), \qquad (4.1)$$
Proof. Let $i \in \{0, \pm 1, \ldots, \pm M'\}$ and $t \in [0, T]$ be arbitrary. From the definition of $U(t)$, equation (1.1), and definition (2.2), we have
$$\begin{aligned}
\dot{U}_i(t) - F_i(t, U(t), U_t) &= \frac{\partial u}{\partial t}(x_i, t) - F_i(t, U(t), U_t) \\
&= f\Bigl(x_i, t,\ u_{(x_i,t)},\ \frac{\partial u}{\partial x}(x_i, t),\ \frac{\partial^2 u}{\partial x^2}(x_i, t)\Bigr) - F_i\bigl(t, U(t), U_t\bigr) \\
&= f\Bigl(x_i, t,\ u_{(x_i,t)},\ \frac{\partial u}{\partial x}(x_i, t),\ \frac{\partial^2 u}{\partial x^2}(x_i, t)\Bigr) - f\bigl(x_i, t,\ L_{i,t} U_t,\ \delta_{i,t} U(t),\ \delta^2_{i,t} U(t)\bigr).
\end{aligned}$$
Hence, by (3.6),
$$\begin{aligned}
\bigl|\dot{U}_i(t) - F_i(t, U(t), U_t)\bigr| \le{} & \beta_1(t)\,\Bigl|\frac{\partial u}{\partial x}(x_i, t) - \delta_{i,t} U(t)\Bigr| \\
& + \beta_2(t)\,\Bigl|\frac{\partial^2 u}{\partial x^2}(x_i, t) - \delta^2_{i,t} U(t)\Bigr| \qquad (4.2) \\
& + \beta_3(t)\,\max_{(s,\tau)\in D}\bigl|u_{(x_i,t)}(s,\tau) - [L_{i,t} U_t](s,\tau)\bigr|.
\end{aligned}$$
By Taylor’s Theorem applied to u(xi+1 , t) and u(xi 1 , t), since u 2 C (4) (B, R), we
have,
h @u h2 @ 2 u h3 @ 3 u
u(xi+1 , t) = u(xi , t) + · (xi , t) + · 2 (xi , t) + · (✓i , t), (4.3)
1! @x 2! @x 3! @x3
h @u h2 @ 2 u h3 @ 3 u
u(xi 1 , t) = u(xi , t) · (xi , t) + · (xi , t) · (⇠i , t), (4.4)
1! @x 2! @x2 3! @x3
✓ ◆
@u 1 @u h3 @ 3 u h3 @ 3 u
= (xi , t) 2h · (xi , t) + · 3 (✓i , t) + · (⇠i , t)
@x 2h @x 6 @x 6 @x3
h2 @ 3 u @ 3u
= (✓i , t) + 3 (⇠i , t) .
12 @x3 @x
Since u 2 C (4) (B, R), there exists a constant C > 0 such that
@ 3u
(x, t) C,
@x3
@u Ch2
(xi , t) i,t U (t) .
@x 6
2L
From this and n + 1 = , we get
h
@u C 4L2 2CL2
(xi , t) i,t U (t) · < . (4.5)
@x 6 (n + 1)2 3n2
To estimate the second term in (4.2), we apply Taylor's Theorem with one more term:
$$u(x_{i+1}, t) = u(x_i, t) + \frac{h}{1!}\,\frac{\partial u}{\partial x}(x_i, t) + \frac{h^2}{2!}\,\frac{\partial^2 u}{\partial x^2}(x_i, t) + \frac{h^3}{3!}\,\frac{\partial^3 u}{\partial x^3}(x_i, t) + \frac{h^4}{4!}\,\frac{\partial^4 u}{\partial x^4}(\tilde{\theta}_i, t), \qquad (4.6)$$
$$u(x_{i-1}, t) = u(x_i, t) - \frac{h}{1!}\,\frac{\partial u}{\partial x}(x_i, t) + \frac{h^2}{2!}\,\frac{\partial^2 u}{\partial x^2}(x_i, t) - \frac{h^3}{3!}\,\frac{\partial^3 u}{\partial x^3}(x_i, t) + \frac{h^4}{4!}\,\frac{\partial^4 u}{\partial x^4}(\tilde{\xi}_i, t), \qquad (4.7)$$
with $\tilde{\theta}_i \in (x_i, x_{i+1})$ and $\tilde{\xi}_i \in (x_{i-1}, x_i)$, respectively. From (4.6) and (4.7), we have
$$\begin{aligned}
\frac{\partial^2 u}{\partial x^2}(x_i, t) - \delta^2_{i,t} U(t)
&= \frac{\partial^2 u}{\partial x^2}(x_i, t) - \frac{1}{h^2}\Bigl(h^2\,\frac{\partial^2 u}{\partial x^2}(x_i, t) + \frac{h^4}{24}\,\frac{\partial^4 u}{\partial x^4}(\tilde{\theta}_i, t) + \frac{h^4}{24}\,\frac{\partial^4 u}{\partial x^4}(\tilde{\xi}_i, t)\Bigr) \\
&= -\frac{h^2}{24}\Bigl(\frac{\partial^4 u}{\partial x^4}(\tilde{\theta}_i, t) + \frac{\partial^4 u}{\partial x^4}(\tilde{\xi}_i, t)\Bigr).
\end{aligned}$$
Since $u \in C^{(4)}(B, \mathbb{R})$, there exists a constant $C > 0$ such that
$$\Bigl|\frac{\partial^4 u}{\partial x^4}(x, t)\Bigr| \le C,$$
for all $(x, t) \in B$. From this and $h = \dfrac{2L}{n+1}$, we get
$$\Bigl|\frac{\partial^2 u}{\partial x^2}(x_i, t) - \delta^2_{i,t} U(t)\Bigr| \le \frac{C h^2}{12} = \frac{4L^2 C}{12(n+1)^2} < \frac{L^2 C}{3n^2}. \qquad (4.8)$$
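The quadratic decay in (4.5) and (4.8) can be observed numerically (an illustrative check, not part of the proof; the test function and the grids are assumptions):

```python
import numpy as np

# Numerical check of the bounds behind (4.5) and (4.8): for u(x) = sin(x),
# where |u'''| <= 1 and |u''''| <= 1, the central-difference errors are
# bounded by h^2/6 and h^2/12 and shrink by a factor ~4 when h is halved.
def diff_errors(M, L=1.0):
    h = L / M
    x = np.linspace(-L, L, 2 * M + 1)
    u = np.sin(x)
    d1 = (u[2:] - u[:-2]) / (2.0 * h)                  # central first difference
    d2 = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2       # central second difference
    e1 = np.max(np.abs(d1 - np.cos(x[1:-1])))
    e2 = np.max(np.abs(d2 + np.sin(x[1:-1])))
    return h, e1, e2

h, e1, e2 = diff_errors(50)
_, e1_fine, e2_fine = diff_errors(100)
```

The observed error ratios near 4 under grid halving reflect the $O(h^2)$ rates proved above.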
For the functional argument, we have
$$u_{(x_i,t)}(s, \tau) - [L_{i,t} U_t](s, \tau) = u(x_i + s, t + \tau) - \frac{x_{k+1} - s}{h}\,(U_t)^t_{k+i}(\tau) - \frac{s - x_k}{h}\,(U_t)^t_{k+1+i}(\tau), \qquad (4.9)$$
for $(s, \tau) \in D$ and $k$ such that $x_k \le s \le x_{k+1}$. Since $(U_t)^t_j(\tau) = (U_t)_j(\tau) = U_j(t + \tau) = u(x_j, t + \tau)$, for $j = k + i$ and $j = k + 1 + i$, from (4.9) and Taylor's Theorem applied at the point $(x_i + s, t + \tau)$, we get
$$\begin{aligned}
u_{(x_i,t)}(s, \tau) - [L_{i,t} U_t](s, \tau)
&= u(x_i + s, t + \tau) - \frac{x_{k+1} - s}{h}\,u(x_k + x_i, t + \tau) - \frac{s - x_k}{h}\,u(x_{k+1} + x_i, t + \tau) \\
&= u(x_i + s, t + \tau)\Bigl[1 - \frac{x_{k+1} - s}{h} - \frac{s - x_k}{h}\Bigr] \\
&\quad - \frac{\partial u}{\partial x}(x_i + s, t + \tau)\Bigl[\frac{x_{k+1} - s}{h}\,(x_k - s) + \frac{s - x_k}{h}\,(x_{k+1} - s)\Bigr] \\
&\quad - \frac{(x_{k+1} - s)(x_k - s)^2}{2h}\,\frac{\partial^2 u}{\partial x^2}(\hat{\theta}_i, t + \tau) - \frac{(s - x_k)(x_{k+1} - s)^2}{2h}\,\frac{\partial^2 u}{\partial x^2}(\hat{\xi}_i, t + \tau),
\end{aligned}$$
where the first two brackets vanish. Since $u \in C^{(4)}(B, \mathbb{R})$, there exists a constant $C > 0$ such that
$$\Bigl|\frac{\partial^2 u}{\partial x^2}(x, t)\Bigr| \le C,$$
for all $(x, t) \in B$. Therefore, since $h = \dfrac{2L}{n+1}$, we get
$$\bigl|u_{(x_i,t)}(s, \tau) - [L_{i,t} U_t](s, \tau)\bigr| \le \frac{2CL^2}{(n+1)^2} < \frac{2CL^2}{n^2}. \qquad (4.10)$$
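The interpolation estimate (4.10) rests on the standard piecewise-linear error bound; here is a quick numerical check (illustrative only; `np.interp` and the test function $\sin x$ are assumptions):

```python
import numpy as np

# Piecewise-linear interpolation error: for u(x) = sin(x), with |u''| <= 1,
# the error on a grid of spacing h is at most h^2/8 (the sharp constant);
# the proof above uses the cruder constant 1/2.
L, M = 1.0, 50
h = L / M
xg = np.linspace(-L, L, 2 * M + 1)
s = np.linspace(-L, L, 4001)               # dense evaluation points
lin = np.interp(s, xg, np.sin(xg))         # piecewise linear interpolant
err = np.max(np.abs(lin - np.sin(s)))
```

The measured error sits just below $h^2/8$, comfortably inside the bound $\tfrac{1}{2}Ch^2$ used in the proof.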
Combining (4.2) with (4.5), (4.8), and (4.10) shows that (3.5) holds with, for example, $c = 2CL^2$ in (4.1), which finishes the proof.
Corollary 4.1 ([37], Corollary 3.1). Suppose that there exists a constant $d > 0$ such that
$$\max\bigl\{|u(x, t)| : x \in [-L, L],\ t \in [0, T]\bigr\} \le d,$$
where $u$ is a solution of equation (1.1) with $f$ defined by (1.4), for a class of functions $w \in C(D, \mathbb{R})$ such that $\max\{|w(s, \tau)| : (s, \tau) \in D\} \le d$. Then the function $f$ satisfies condition (3.6). Moreover, if $u$ is of class $C^{(4)}(B, \mathbb{R})$, then $F$ defined by (2.2) satisfies condition (3.5).
Proof. In order for Theorem 4.1 to be applied, it suffices to show (3.6). Let $x \in [-L, L]$, $t \in [0, T]$, $w, \bar{w} \in C(D, \mathbb{R})$, and $p, q, \bar{p}, \bar{q} \in \mathbb{R}$ be arbitrary. From the definition of $f$, we get

and

We now apply Theorem 4.1 and conclude that $F$ defined by (2.2) satisfies (3.5) with
$$\gamma_n(t) = \frac{c}{n^2}\,(\epsilon + 1 + 2d),$$
Corollary 4.2 ([37], Corollary 3.2). Let $d > 0$ be as in Corollary 4.1, with $f$ defined by (1.5). Then $f$ satisfies condition (3.6). Moreover, if $u$ is of class $C^{(4)}(B, \mathbb{R})$, then $F$ defined by (2.2) and (1.5) satisfies condition (3.5).
Proof. We apply Theorem 4.1 with $f$ defined by (1.5). Let $x \in [-L, L]$, $t \in [0, T]$, $\omega, \hat{\omega} \in C(D, \mathbb{R})$, and $p, q, \hat{p}, \hat{q} \in \mathbb{R}$ be arbitrary. Since $a$ is a positive function, from (1.5), we get
$$\begin{aligned}
|f(x, t, \omega, p, q) - f(x, t, \hat{\omega}, \hat{p}, \hat{q})|
&= \Bigl| a(t)\,q + a(t)\int_{-\tau_0}^{0}\!\!\int_{-\hat{\tau}}^{\hat{\tau}} \omega(s, \tau)\, ds\, d\tau - a(t)\,\hat{q} - a(t)\int_{-\tau_0}^{0}\!\!\int_{-\hat{\tau}}^{\hat{\tau}} \hat{\omega}(s, \tau)\, ds\, d\tau \Bigr| \\
&\le a(t)\,|q - \hat{q}| + a(t)\,\Bigl|\int_{-\tau_0}^{0}\!\!\int_{-\hat{\tau}}^{\hat{\tau}} \bigl(\omega(s, \tau) - \hat{\omega}(s, \tau)\bigr)\, ds\, d\tau\Bigr| \\
&\le a(t)\,|q - \hat{q}| + a(t)\int_{-\tau_0}^{0}\!\!\int_{-\hat{\tau}}^{\hat{\tau}} |\omega(s, \tau) - \hat{\omega}(s, \tau)|\, ds\, d\tau \\
&\le a(t)\,|q - \hat{q}| + a(t)\int_{-\tau_0}^{0}\!\!\int_{-\hat{\tau}}^{\hat{\tau}} \max_{(s,\tau)\in D} |\omega(s, \tau) - \hat{\omega}(s, \tau)|\, ds\, d\tau \\
&= a(t)\,|q - \hat{q}| + 2\tau_0\hat{\tau}\, a(t)\,\max_{(s,\tau)\in D} |\omega(s, \tau) - \hat{\omega}(s, \tau)|,
\end{aligned}$$
which shows (3.6). We now apply Theorem 4.1 and conclude that $F$ satisfies (3.5) with
$$\gamma_n(t) = \frac{c\, a(t)}{n^2}\,(1 + 2\tau_0\hat{\tau}).$$
We have proved some preliminary results that will provide useful stepping stones in deriving error bounds for numerical solutions to general partial functional differential equations in the next chapters.
CHAPTER 5
In this chapter, we will show that the iterations (3.2) of the Picard type in the
functional sense satisfy conditions (3.7)–(3.10), which will be useful in deriving error
bounds for the general numerical schemes (2.1) and (3.1). The proof for iterations of
the type (3.3) is similar.
Theorem 5.1 ([37], Lemma 3.2). Suppose that $f$ satisfies condition (3.6) and
$$\frac{\partial f}{\partial q}(x, t, \omega, p, q) \ge 0, \qquad (5.1)$$
$$\nu(t) \le \frac{\partial f}{\partial q}(x, t, \omega, p, q). \qquad (5.2)$$
Then the splitting functions $G$ with components
$$G_i(t, \zeta, z, \omega) = f\Bigl(x_i, t,\ L_{i,t}\,\omega,\ \frac{z^t_{i+1} - z^t_{i-1}}{2h},\ \frac{z^t_{i+1} - 2\zeta_i + z^t_{i-1}}{h^2}\Bigr)$$
satisfy conditions (3.7)–(3.10). In particular, (3.7) follows directly from the definition of $G_i$ and (2.2),
where $t \in [0, T]$ and $r \in C([-\tau_0, T], \mathbb{R}^n)$ are arbitrary. This shows that (3.7) holds.
In order to prove (3.8), we begin by using the definition of $G_i$ and then apply the mean value theorem:
$$\begin{aligned}
G_i(t, \zeta, z, \omega) - G_i(t, \bar{\zeta}, z, \omega)
&= f\Bigl(x_i, t,\ L_{i,t}\,\omega,\ \frac{z^t_{i+1} - z^t_{i-1}}{2h},\ \frac{z^t_{i+1} - 2\zeta_i + z^t_{i-1}}{h^2}\Bigr) \\
&\quad - f\Bigl(x_i, t,\ L_{i,t}\,\omega,\ \frac{z^t_{i+1} - z^t_{i-1}}{2h},\ \frac{z^t_{i+1} - 2\bar{\zeta}_i + z^t_{i-1}}{h^2}\Bigr)
= -\frac{2}{h^2}\,\frac{\partial f}{\partial q}(Q)\,(\zeta_i - \bar{\zeta}_i),
\end{aligned}$$
where $Q$ is an intermediate point. Hence
$$\zeta_i - \bar{\zeta}_i - \varepsilon\,\bigl[G_i(t, \zeta, z, \omega) - G_i(t, \bar{\zeta}, z, \omega)\bigr] = \Bigl(1 + \frac{2\varepsilon}{h^2}\,\frac{\partial f}{\partial q}(Q)\Bigr)(\zeta_i - \bar{\zeta}_i),$$
where $\varepsilon \ge 0$. Upon taking the infinity norm on both sides of the above equation, we deduce that
$$\bigl\|\zeta - \bar{\zeta} - \varepsilon\,[G(t, \zeta, z, \omega) - G(t, \bar{\zeta}, z, \omega)]\bigr\|_n = \Bigl|1 + \frac{2\varepsilon}{h^2}\,\frac{\partial f}{\partial q}(Q)\Bigr|\,\|\zeta - \bar{\zeta}\|_n.$$
From (5.1) and (5.2), it follows that
$$\bigl\|\zeta - \bar{\zeta} - \varepsilon\,[G(t, \zeta, z, \omega) - G(t, \bar{\zeta}, z, \omega)]\bigr\|_n \ge \Bigl(1 + \frac{2\varepsilon\,\nu(t)}{h^2}\Bigr)\|\zeta - \bar{\zeta}\|_n,$$
so that (3.8) holds with
$$\mu_1(t) = -\frac{2\nu(t)}{h^2}.$$
Similarly, in order to prove (3.9), we use (3.6) with the functional argument and $\zeta$ fixed:
$$\begin{aligned}
\bigl|G_i(t, \zeta, z, \omega) - G_i(t, \zeta, \bar{z}, \omega)\bigr|
&= \Bigl| f\Bigl(x_i, t,\ L_{i,t}\,\omega,\ \frac{z^t_{i+1} - z^t_{i-1}}{2h},\ \frac{z^t_{i+1} - 2\zeta_i + z^t_{i-1}}{h^2}\Bigr) \\
&\qquad - f\Bigl(x_i, t,\ L_{i,t}\,\omega,\ \frac{\bar{z}^t_{i+1} - \bar{z}^t_{i-1}}{2h},\ \frac{\bar{z}^t_{i+1} - 2\zeta_i + \bar{z}^t_{i-1}}{h^2}\Bigr)\Bigr| \\
&\le \beta_1(t)\,\Bigl|\frac{z^t_{i+1} - z^t_{i-1}}{2h} - \frac{\bar{z}^t_{i+1} - \bar{z}^t_{i-1}}{2h}\Bigr| + \beta_2(t)\,\Bigl|\frac{z^t_{i+1} - 2\zeta_i + z^t_{i-1}}{h^2} - \frac{\bar{z}^t_{i+1} - 2\zeta_i + \bar{z}^t_{i-1}}{h^2}\Bigr| \\
&\le \frac{\beta_1(t)}{2h}\bigl(|z^t_{i+1} - \bar{z}^t_{i+1}| + |z^t_{i-1} - \bar{z}^t_{i-1}|\bigr) + \frac{\beta_2(t)}{h^2}\bigl(|z^t_{i+1} - \bar{z}^t_{i+1}| + |z^t_{i-1} - \bar{z}^t_{i-1}|\bigr) \\
&= \Bigl(\frac{\beta_1(t)}{2h} + \frac{\beta_2(t)}{h^2}\Bigr)\bigl(|z^t_{i+1} - \bar{z}^t_{i+1}| + |z^t_{i-1} - \bar{z}^t_{i-1}|\bigr),
\end{aligned}$$
where $t \in [0, T]$, $\zeta, z, \bar{z} \in \mathbb{R}^n$, $\omega \in C([-\tau_0, 0], \mathbb{R}^n)$. Since
$$\frac{\beta_1(t)}{2h} + \frac{\beta_2(t)}{h^2} \ge 0,$$
upon taking the infinity norm (with $i = 1, \ldots, n$) on both sides of the above inequality, we get
$$\|G(t, \zeta, z, \omega) - G(t, \zeta, \bar{z}, \omega)\|_n \le 2\Bigl(\frac{\beta_1(t)}{2h} + \frac{\beta_2(t)}{h^2}\Bigr)\|z - \bar{z}\|_n,$$
so that (3.9) holds with
$$\mu_2(t) = 2\Bigl(\frac{\beta_1(t)}{2h} + \frac{\beta_2(t)}{h^2}\Bigr).$$
In order to prove (3.10), we apply (3.6) with $\beta_1(t) = \beta_2(t) = 0$ and obtain
$$\begin{aligned}
\bigl|G_i(t, \zeta, z, \omega) - G_i(t, \zeta, z, \bar{\omega})\bigr|
&= \Bigl| f\Bigl(x_i, t,\ L_{i,t}\,\omega,\ \frac{z^t_{i+1} - z^t_{i-1}}{2h},\ \frac{z^t_{i+1} - 2\zeta_i + z^t_{i-1}}{h^2}\Bigr) \\
&\qquad - f\Bigl(x_i, t,\ L_{i,t}\,\bar{\omega},\ \frac{z^t_{i+1} - z^t_{i-1}}{2h},\ \frac{z^t_{i+1} - 2\zeta_i + z^t_{i-1}}{h^2}\Bigr)\Bigr| \\
&\le \beta_3(t)\,\max_{(s,\tau)\in D}\bigl|[L_{i,t}\,\omega](s,\tau) - [L_{i,t}\,\bar{\omega}](s,\tau)\bigr|.
\end{aligned}$$
Since
$$L_{i,t}\,\omega(s, \tau) - L_{i,t}\,\bar{\omega}(s, \tau) = \frac{x_{k+1} - s}{h}\bigl(\omega^t_{k+i}(\tau) - \bar{\omega}^t_{k+i}(\tau)\bigr) + \frac{s - x_k}{h}\bigl(\omega^t_{k+i+1}(\tau) - \bar{\omega}^t_{k+i+1}(\tau)\bigr),$$
we have
$$\begin{aligned}
\bigl|L_{i,t}\,\omega(s, \tau) - L_{i,t}\,\bar{\omega}(s, \tau)\bigr|
&\le \frac{x_{k+1} - s}{h}\,\bigl|\omega^t_{k+i}(\tau) - \bar{\omega}^t_{k+i}(\tau)\bigr| + \frac{s - x_k}{h}\,\bigl|\omega^t_{k+i+1}(\tau) - \bar{\omega}^t_{k+i+1}(\tau)\bigr| \\
&\le \frac{x_{k+1} - s}{h}\,\max_{\tau \in [-\tau_0, 0]} \|(\omega - \bar{\omega})(\tau)\| + \frac{s - x_k}{h}\,\max_{\tau \in [-\tau_0, 0]} \|(\omega - \bar{\omega})(\tau)\| \\
&= \frac{x_{k+1} - s + s - x_k}{h}\,\|\omega - \bar{\omega}\|^0_n = \|\omega - \bar{\omega}\|^0_n.
\end{aligned}$$
Upon taking the maximum over $D$ on both sides of the above relations, we get
$$\|G(t, \zeta, z, \omega) - G(t, \zeta, z, \bar{\omega})\|_n \le \beta_3(t)\,\|\omega - \bar{\omega}\|^0_n.$$
Therefore, (3.10) holds with
$$\mu_3(t) = \beta_3(t),$$
which finishes the proof.
Having proven that $G$ satisfies the conditions (3.7)–(3.10) under the assumptions of Theorem 5.1, we are now in a position to investigate the numerical schemes (2.1) and (3.1) designed to find numerical solutions to general partial functional differential equations and to derive appropriate bounds on their errors, from which we can deduce important convergence properties of the numerical algorithm.
CHAPTER 6
DIFFERENTIAL EQUATIONS
In this chapter, we prove a sequence of results that can be used to deduce appropriate bounds on the errors that we can expect to get numerically by applying the numerical schemes (2.1) and (3.1) to solve a class of general partial functional differential equations.

The first of these results is a main theorem specifying a bound on the norm of the error in terms of an integral, which holds as long as the right-hand-side function $f$ satisfies an appropriate condition. In particular, we require a sharper condition than (5.1).
Theorem 6.1 ([37], Theorem 4.1). Suppose the given function $f$ satisfies conditions (3.6) and (5.1), the step-size $h > 0$ is chosen in such a way that
$$\frac{\partial f}{\partial q}(x, t, \omega, p, q) - \frac{h}{2}\,\Bigl|\frac{\partial f}{\partial p}(x, t, \omega, p, q)\Bigr| \ge 0, \qquad (6.1)$$
for all $x \in [-L, L]$, $t \in [0, T]$, $\omega \in C(D, \mathbb{R})$, $p, q \in \mathbb{R}$, and $\dfrac{\partial f}{\partial p}(x, t, \omega, \cdot)$, $\dfrac{\partial f}{\partial q}(x, t, \omega, \cdot)$ are continuous functions for each $x \in [-L, L]$, $t \in [0, T]$, and $\omega \in C(D, \mathbb{R})$. Moreover, suppose that $F$ satisfies (2.2) and $u \in C^{(4)}(B, \mathbb{R})$. Then the errors $e(t)$ satisfy
$$\|e(t)\|_n \le \int_0^t \gamma_n(s)\,\exp\Bigl(\int_s^t \beta_3(\tau)\, d\tau\Bigr)\, ds,$$
for $t \in [0, T]$.
Proof. We apply Theorem 6.7 from the Appendix with $\rho(x_i, t) = e_i(t) = U_i(t) - \eta_i(t)$, $t \in [-\tau_0, T]$ and $i = 0, \pm 1, \ldots, \pm\tilde{M}$, where $v_i(t) = g(x_i, t)$, for $i = \pm M, \ldots, \pm\tilde{M}$. Then, $\dfrac{\partial \rho}{\partial t}(x_i, t) = \dot{e}_i(t) = \dot{U}_i(t) - \dot{\eta}_i(t)$. We want to show that
$$\max_{i = 0, \pm 1, \ldots, \pm M'} |e_i(t)| = \max_{i = 0, \pm 1, \ldots, \pm M'} |\rho(x_i, t)| \le \int_0^t \gamma_n(s)\,\exp\Bigl(\int_s^t \beta_3(\tau)\, d\tau\Bigr)\, ds.$$
Since
$$\dot{\eta}_i(t) = F_i(t, \eta(t), \eta_t) = f\bigl(x_i, t,\ L_{i,t}\,\eta_t,\ \delta_{i,t}\,\eta(t),\ \delta^2_{i,t}\,\eta(t)\bigr),$$
we get
$$\begin{aligned}
\frac{\partial \rho}{\partial t}(x_i, t) &= \dot{U}_i(t) - F_i(t, U(t), U_t) \\
&\quad + f\bigl(x_i, t,\ L_{i,t} U_t,\ \delta_{i,t} U(t),\ \delta^2_{i,t} U(t)\bigr) - f\bigl(x_i, t,\ L_{i,t}\,\eta_t,\ \delta_{i,t} U(t),\ \delta^2_{i,t} U(t)\bigr) \\
&\quad + f\bigl(x_i, t,\ L_{i,t}\,\eta_t,\ \delta_{i,t} U(t),\ \delta^2_{i,t} U(t)\bigr) - f\bigl(x_i, t,\ L_{i,t}\,\eta_t,\ \delta_{i,t}\,\eta(t),\ \delta^2_{i,t}\,\eta(t)\bigr).
\end{aligned}$$
Applying the mean value theorem in integral form to the last difference, we obtain
$$\frac{\partial \rho}{\partial t}(x_i, t) \le \tilde{\delta}\rho(x_i, t)\int_0^1 \frac{\partial f}{\partial p}(Q_s)\, ds + \tilde{\delta}^{(2)}\rho(x_i, t)\int_0^1 \frac{\partial f}{\partial q}(Q_s)\, ds + \bigl|\dot{U}_i(t) - F_i(t, U(t), U_t)\bigr| + \bigl| f\bigl(x_i, t,\ L_{i,t} U_t,\ \delta_{i,t} U(t),\ \delta^2_{i,t} U(t)\bigr) - f\bigl(x_i, t,\ L_{i,t}\,\eta_t,\ \delta_{i,t} U(t),\ \delta^2_{i,t} U(t)\bigr)\bigr|$$
and
$$\frac{\partial \rho}{\partial t}(x_i, t) \ge \tilde{\delta}\rho(x_i, t)\int_0^1 \frac{\partial f}{\partial p}(Q_s)\, ds + \tilde{\delta}^{(2)}\rho(x_i, t)\int_0^1 \frac{\partial f}{\partial q}(Q_s)\, ds - \bigl|\dot{U}_i(t) - F_i(t, U(t), U_t)\bigr| - \bigl| f\bigl(x_i, t,\ L_{i,t} U_t,\ \delta_{i,t} U(t),\ \delta^2_{i,t} U(t)\bigr) - f\bigl(x_i, t,\ L_{i,t}\,\eta_t,\ \delta_{i,t} U(t),\ \delta^2_{i,t} U(t)\bigr)\bigr|,$$
where $\tilde{\delta}\rho(x_i, t)$, $\tilde{\delta}^{(2)}\rho(x_i, t)$ are defined by (6.12) in the Appendix and $Q_s$ is a point from the domain of the function $f$. We now apply conditions (3.5) and (3.6) to obtain
$$\Bigl|\frac{\partial \rho}{\partial t}(x_i, t) - \tilde{\delta}\rho(x_i, t)\int_0^1 \frac{\partial f}{\partial p}(Q_s)\, ds - \tilde{\delta}^{(2)}\rho(x_i, t)\int_0^1 \frac{\partial f}{\partial q}(Q_s)\, ds\Bigr| \le \gamma_n(t) + \beta_3(t)\,\max_{(s,\tau)\in D}\bigl|[L_{i,t} U_t](s,\tau) - [L_{i,t}\,\eta_t](s,\tau)\bigr|.$$
Since
$$\begin{aligned}
\max_{(s,\tau)\in D}\bigl|[L_{i,t} U_t](s,\tau) - [L_{i,t}\,\eta_t](s,\tau)\bigr|
&= \max_{(s,\tau)\in D}\Bigl|\frac{x_{k+1} - s}{h}\bigl((U_t)_{k+i}(\tau) - (\eta_t)_{k+i}(\tau)\bigr) + \frac{s - x_k}{h}\bigl((U_t)_{k+1+i}(\tau) - (\eta_t)_{k+1+i}(\tau)\bigr)\Bigr| \\
&= \max_{(s,\tau)\in D}\Bigl|\frac{x_{k+1} - s}{h}\,(e_t)_{k+i}(\tau) + \frac{s - x_k}{h}\,(e_t)_{k+1+i}(\tau)\Bigr| \\
&= \max_{(s,\tau)\in D}\Bigl|\frac{x_{k+1} - s}{h}\,e_{k+i}(t + \tau) + \frac{s - x_k}{h}\,e_{k+1+i}(t + \tau)\Bigr| \\
&= \max_{(s,\tau)\in D}\Bigl|\frac{x_{k+1} - s}{h}\,\rho(x_i + x_k, t + \tau) + \frac{s - x_k}{h}\,\rho(x_i + x_{k+1}, t + \tau)\Bigr| \\
&\le \max_{(s,\tau)\in D}\Bigl(\frac{x_{k+1} - s}{h} + \frac{s - x_k}{h}\Bigr)\cdot\max_{j = k,\, k+1} |\rho(x_i + x_j, t + \tau)|,
\end{aligned}$$
and the first factor equals $1$, we conclude that
$$\Bigl|\frac{\partial \rho}{\partial t}(x_i, t) - \tilde{\delta}\rho(x_i, t)\int_0^1 \frac{\partial f}{\partial p}(Q_s)\, ds - \tilde{\delta}^{(2)}\rho(x_i, t)\int_0^1 \frac{\partial f}{\partial q}(Q_s)\, ds\Bigr| \le \gamma_n(t) + \beta_3(t)\,\|\rho_{(x_i,t)}\|^h_n.$$
Denoting
$$G(x_i, t, \rho_{(x_i,t)}) = \int_0^1 \frac{\partial f}{\partial p}(Q_s)\, ds, \qquad H(x_i, t, \rho_{(x_i,t)}) = \int_0^1 \frac{\partial f}{\partial q}(Q_s)\, ds,$$
we now apply Theorem 6.7 from the Appendix and obtain
$$\|e(t)\|_n \le \int_0^t \exp\Bigl(\int_s^t \beta_3(r)\, dr\Bigr)\,\gamma_n(s)\, ds,$$
which finishes the proof.
The next theorem will be applied to derive an error bound for iterative processes applied to problem (1.1)–(1.3).

Theorem 6.2 ([37], Lemma 5.1). Suppose $F$ and $G$ satisfy conditions (3.5) and (3.7)–(3.10). Then,
$$\|E^{(k+1)}(t)\|_n \le \int_0^t \exp\Bigl(\int_\tau^t \mu_1(s)\, ds\Bigr)\Bigl(\bigl(\mu_2(\tau) + \mu_3(\tau)\bigr)\,\|E^{(k)}_\tau\|^0_n + \gamma_n(\tau)\Bigr)\, d\tau. \qquad (6.4)$$
Proof. From the definition of $E^{(k+1)}(t) = U(t) - \eta^{(k+1)}(t)$ and the iteration (3.1), we have
$$\begin{aligned}
\dot{E}^{(k+1)}(t) &= \dot{U}(t) - F(t, U(t), U_t) + F(t, U(t), U_t) - G(t, \eta^{(k+1)}(t), U(t), U_t) \\
&\quad + G(t, \eta^{(k+1)}(t), U(t), U_t) - G(t, \eta^{(k+1)}(t), \eta^{(k)}(t), U_t) \\
&\quad + G(t, \eta^{(k+1)}(t), \eta^{(k)}(t), U_t) - G(t, \eta^{(k+1)}(t), \eta^{(k)}(t), \eta^{(k)}_t).
\end{aligned}$$
We now subtract $\varepsilon$ times the bracket handled by (3.8), where $\varepsilon > 0$ is arbitrary, and evaluate the infinity norm:
$$\begin{aligned}
\bigl\|U(t) - \eta^{(k+1)}(t) - \varepsilon\,[G(t, U(t), U(t), U_t) - G(t, \eta^{(k+1)}(t), U(t), U_t)]\bigr\|_n
\le{}& \bigl\|E^{(k+1)}(t) + \varepsilon\,\dot{E}^{(k+1)}(t)\bigr\|_n \\
&+ \varepsilon\,\Bigl(\bigl\|\dot{U}(t) - F(t, U(t), U_t)\bigr\|_n \\
&+ \bigl\|G(t, \eta^{(k+1)}(t), U(t), U_t) - G(t, \eta^{(k+1)}(t), \eta^{(k)}(t), U_t)\bigr\|_n \\
&+ \bigl\|G(t, \eta^{(k+1)}(t), \eta^{(k)}(t), U_t) - G(t, \eta^{(k+1)}(t), \eta^{(k)}(t), \eta^{(k)}_t)\bigr\|_n\Bigr).
\end{aligned}$$
Since $\varepsilon > 0$, applying (3.8) to the left-hand side and using the scaling property of the norm, we conclude that the following inequality holds:
$$\begin{aligned}
\frac{1}{\varepsilon}\Bigl(\bigl\|E^{(k+1)}(t) + \varepsilon\,\dot{E}^{(k+1)}(t)\bigr\|_n - \bigl\|E^{(k+1)}(t)\bigr\|_n\Bigr)
\le{}& \mu_1(t)\,\bigl\|E^{(k+1)}(t)\bigr\|_n + \bigl\|\dot{U}(t) - F(t, U(t), U_t)\bigr\|_n \\
&+ \bigl\|G(t, \eta^{(k+1)}(t), U(t), U_t) - G(t, \eta^{(k+1)}(t), \eta^{(k)}(t), U_t)\bigr\|_n \\
&+ \bigl\|G(t, \eta^{(k+1)}(t), \eta^{(k)}(t), U_t) - G(t, \eta^{(k+1)}(t), \eta^{(k)}(t), \eta^{(k)}_t)\bigr\|_n.
\end{aligned}$$
Notice that the right-hand side of the above inequality does not depend on $\varepsilon$; by taking $\varepsilon \to 0$ on both sides, we deduce that
$$\begin{aligned}
D_-\|E^{(k+1)}(t)\|_n &= \lim_{\varepsilon \to 0} \frac{1}{\varepsilon}\Bigl(\bigl\|E^{(k+1)}(t) + \varepsilon\,\dot{E}^{(k+1)}(t)\bigr\|_n - \bigl\|E^{(k+1)}(t)\bigr\|_n\Bigr) \\
&\le \mu_1(t)\,\bigl\|E^{(k+1)}(t)\bigr\|_n + \bigl\|\dot{U}(t) - F(t, U(t), U_t)\bigr\|_n \\
&\quad + \bigl\|G(t, \eta^{(k+1)}(t), U(t), U_t) - G(t, \eta^{(k+1)}(t), \eta^{(k)}(t), U_t)\bigr\|_n \\
&\quad + \bigl\|G(t, \eta^{(k+1)}(t), \eta^{(k)}(t), U_t) - G(t, \eta^{(k+1)}(t), \eta^{(k)}(t), \eta^{(k)}_t)\bigr\|_n,
\end{aligned}$$
where $D_-$ is the left-hand side derivative with respect to $t$. We now apply conditions (3.5), (3.9), and (3.10) to get
$$\begin{aligned}
D_-\|E^{(k+1)}(t)\|_n
&\le \mu_1(t)\,\bigl\|E^{(k+1)}(t)\bigr\|_n + \gamma_n(t) + \mu_2(t)\,\bigl\|U(t) - \eta^{(k)}(t)\bigr\|_n + \mu_3(t)\,\bigl\|U_t - \eta^{(k)}_t\bigr\|^0_n \\
&= \mu_1(t)\,\bigl\|E^{(k+1)}(t)\bigr\|_n + \gamma_n(t) + \mu_2(t)\,\bigl\|E^{(k)}(t)\bigr\|_n + \mu_3(t)\,\bigl\|E^{(k)}_t\bigr\|^0_n \\
&\le \mu_1(t)\,\bigl\|E^{(k+1)}(t)\bigr\|_n + \bigl(\mu_2(t) + \mu_3(t)\bigr)\,\|E^{(k)}_t\|^0_n + \gamma_n(t).
\end{aligned}$$
By the comparison theorem for differential inequalities, $\|E^{(k+1)}(t)\|_n \le \psi(t)$, where $\psi$ solves $\dot{\psi}(t) = \mu_1(t)\,\psi(t) + \xi(t)$, $\psi(0) = 0$, and
$$\xi(t) = \bigl(\mu_2(t) + \mu_3(t)\bigr)\,\|E^{(k)}_t\|^0_n + \gamma_n(t).$$
Notice that
$$\psi(t) = \int_0^t \xi(s)\,\exp\Bigl(\int_s^t \mu_1(\tau)\, d\tau\Bigr)\, ds,$$
so that
$$\|E^{(k+1)}(t)\|_n \le \int_0^t \xi(s)\,\exp\Bigl(\int_s^t \mu_1(\tau)\, d\tau\Bigr)\, ds,$$
which is (6.4) and finishes the proof.
In what follows, we will prove a sequence of preliminary results that will be useful in proving Theorem 6.5.

Henceforth, we will use the following notation. Let $t \in [0, T]$. The maximum starting error will be denoted by
$$E(t) = \max_{s \in [-\tau_0, t]} \|E^{(0)}(s)\|_n.$$
We assume that the function $\mu_1$ has no roots in $[0, T]$, that is, either $\mathrm{sign}(\mu_1) = 1$ or $\mathrm{sign}(\mu_1) = -1$, and we define
$$r(t) = \mathrm{sign}(\mu_1)\,\max_{\tau \in [0,t]} \frac{\mu_2(\tau) + \mu_3(\tau)}{|\mu_1(\tau)|}, \qquad
\bar{\gamma}_n(t) = \mathrm{sign}(\mu_1)\,\max_{\tau \in [0,t]} \frac{\gamma_n(\tau)}{|\mu_1(\tau)|}.$$
For the sake of simplicity of the subsequent proofs, we also introduce the following definitions of four functions:
$$A(t) = \int_0^t \mu_1(\tau)\, d\tau,$$
$$\alpha_k(t) = (-1)^k\Bigl(1 - e^{A(t)}\sum_{j=0}^{k-1} \frac{\bigl(-A(t)\bigr)^j}{j!}\Bigr),$$
$$S_k(t) = \bigl(r(t)\bigr)^{k-1}\alpha_k(t),$$
$$\sigma_k(t) = \bar{\gamma}_n(t)\sum_{i=1}^{k} S_i(t).$$
Theorem 6.3 ([37], Lemma 5.2). If $\mu_1$ has no roots in $[0, T]$, then all functions
$$\tilde{\alpha}_k(t) = \bigl(\mathrm{sign}(\mu_1)\bigr)^k\,\alpha_k(t), \qquad k = 1, 2, \ldots,$$
are nonnegative and nondecreasing on $[0, T]$.

Proof. We write $\tilde{\alpha}_k(t) = (\mathrm{sign}\,\mu_1)^k\,\alpha_k(t)$
and first show that $\tilde{\alpha}'_k(t) \ge 0$, for all $t \in [0, T]$. From the definition of the function $A$, we get
$$\begin{aligned}
\alpha'_k(t) &= (-1)^{k+1}\,e^{A(t)}\,A'(t)\Bigl(\sum_{j=0}^{k-1} \frac{(-A(t))^j}{j!} - \sum_{j=0}^{k-2} \frac{(-A(t))^j}{j!}\Bigr) \\
&= (-1)^{k+1}\,e^{A(t)}\,A'(t)\,\frac{\bigl(-A(t)\bigr)^{k-1}}{(k-1)!}
= e^{A(t)}\,\mu_1(t)\,\frac{A(t)^{k-1}}{(k-1)!}.
\end{aligned}$$
If $\mathrm{sign}(\mu_1) = 1$, then $A(t) \ge 0$ and
$$\tilde{\alpha}'_k(t) = \alpha'_k(t) = e^{A(t)}\,\mu_1(t)\,\frac{A(t)^{k-1}}{(k-1)!} \ge 0,$$
so $\tilde{\alpha}_k(t)$ is nondecreasing in the first case. If $\mathrm{sign}(\mu_1) = -1$, then $A(t) \le 0$ and
$$\tilde{\alpha}'_k(t) = (-1)^k\,\alpha'_k(t) = e^{A(t)}\,\bigl(-\mu_1(t)\bigr)\,\frac{\bigl(-A(t)\bigr)^{k-1}}{(k-1)!} \ge 0,$$
showing that $\tilde{\alpha}_k(t)$ is nondecreasing also in the second case.

Since $A(0) = 0$, we have $\tilde{\alpha}_k(0) = \alpha_k(0) = 0$, and since $\tilde{\alpha}_k(t)$ is nondecreasing on $[0, T]$, we conclude that $\tilde{\alpha}_k(t) \ge 0$, for $t \in [0, T]$, which finishes the proof.
We now apply Theorem 6.3 to prove the next theorem on the nonnegativity and monotonicity of $r(t)S_k(t)$ and $\bar{\gamma}_n(t)S_k(t)$ for $k = 1, 2, \ldots$.

Theorem 6.4 ([37], Corollary 5.1). If $\mu_1$ has no roots in $[0, T]$, then the functions $r(t)S_k(t)$ and $\bar{\gamma}_n(t)S_k(t)$, with $k = 1, 2, \ldots$, are nondecreasing and nonnegative for all $t \in [0, T]$.
Proof. Let $\tilde{r}(t) = r(t)\,S_k(t)$. Then, from the definitions of $r(t)$ and $S_k(t)$, we get
$$\begin{aligned}
\tilde{r}(t) &= r(t)\cdot\bigl(r(t)\bigr)^{k-1}\alpha_k(t) = \bigl(r(t)\bigr)^k\alpha_k(t) \\
&= \Bigl(\mathrm{sign}(\mu_1)\,\max_{\tau \in [0,t]} \frac{\mu_2(\tau) + \mu_3(\tau)}{|\mu_1(\tau)|}\Bigr)^k\alpha_k(t) \\
&= \bigl(\mathrm{sign}(\mu_1)\bigr)^k\alpha_k(t)\cdot\Bigl(\max_{\tau \in [0,t]} \frac{\mu_2(\tau) + \mu_3(\tau)}{|\mu_1(\tau)|}\Bigr)^k.
\end{aligned}$$
By Theorem 6.3, the factor $(\mathrm{sign}(\mu_1))^k\alpha_k(t)$ is nonnegative and nondecreasing, and the $k$-th power of the maximum is nonnegative and nondecreasing as a function of $t$; hence $\tilde{r}(t)$ is nonnegative and nondecreasing.

We now define $\tilde{\gamma}(t) = \bar{\gamma}_n(t)\,S_k(t)$. From the definitions of the functions $\bar{\gamma}_n(t)$ and $S_k(t)$, we get
$$\begin{aligned}
\tilde{\gamma}(t) &= \Bigl(\mathrm{sign}(\mu_1)\,\max_{\tau \in [0,t]} \frac{\gamma_n(\tau)}{|\mu_1(\tau)|}\Bigr)\cdot\bigl(r(t)\bigr)^{k-1}\alpha_k(t) \\
&= \mathrm{sign}(\mu_1)\,\bigl(r(t)\bigr)^{k-1}\alpha_k(t)\,\max_{\tau \in [0,t]} \frac{\gamma_n(\tau)}{|\mu_1(\tau)|} \\
&= \bigl(\mathrm{sign}(\mu_1)\bigr)^k\,\alpha_k(t)\cdot\Bigl(\max_{\tau \in [0,t]} \frac{\mu_2(\tau) + \mu_3(\tau)}{|\mu_1(\tau)|}\Bigr)^{k-1}\cdot\max_{\tau \in [0,t]} \frac{\gamma_n(\tau)}{|\mu_1(\tau)|}. \qquad (6.5)
\end{aligned}$$
Since the maxima
$$\Bigl(\max_{\tau \in [0,t]} \frac{\mu_2(\tau) + \mu_3(\tau)}{|\mu_1(\tau)|}\Bigr)^{k-1} \quad\text{and}\quad \max_{\tau \in [0,t]} \frac{\gamma_n(\tau)}{|\mu_1(\tau)|}$$
are nonnegative and nondecreasing, and, by Theorem 6.3, so is $(\mathrm{sign}(\mu_1))^k\alpha_k(t)$, the function $\tilde{\gamma}(t)$ is nonnegative and nondecreasing, which finishes the proof.
The last result that is necessary in order to prove Theorem 6.5 can be summarised by the following lemma.

Lemma 6.1. For $k = 1, 2, \ldots$ and $t \in [0, T]$,
$$\int_0^t \exp\bigl(-A(\tau)\bigr)\,\alpha_k(\tau)\,\mu_1(\tau)\, d\tau = \exp\bigl(-A(t)\bigr)\,\alpha_{k+1}(t).$$

Proof. Integrating by parts, we get
$$\begin{aligned}
\int_0^t e^{-A(\tau)}\,\mu_1(\tau)\,\alpha_k(\tau)\, d\tau
&= \int_0^t e^{-A(\tau)}\,A'(\tau)\,\alpha_k(\tau)\, d\tau
= \int_0^t \Bigl(-\frac{d}{d\tau}\, e^{-A(\tau)}\Bigr)\alpha_k(\tau)\, d\tau \\
&= \Bigl[-e^{-A(\tau)}\,\alpha_k(\tau)\Bigr]_{\tau=0}^{\tau=t} + \int_0^t e^{-A(\tau)}\,\alpha'_k(\tau)\, d\tau \\
&= -e^{-A(t)}\,\alpha_k(t) + e^{-A(0)}\,\alpha_k(0) + \int_0^t e^{-A(\tau)}\,\alpha'_k(\tau)\, d\tau \\
&= -e^{-A(t)}\,\alpha_k(t) + \int_0^t e^{-A(\tau)}\,\alpha'_k(\tau)\, d\tau.
\end{aligned}$$
Since
$$\alpha'_k(\tau) = e^{A(\tau)}\,A'(\tau)\,\frac{A(\tau)^{k-1}}{(k-1)!},$$
we get
$$\begin{aligned}
\int_0^t e^{-A(\tau)}\,\mu_1(\tau)\,\alpha_k(\tau)\, d\tau
&= -e^{-A(t)}\,\alpha_k(t) + \int_0^t A'(\tau)\,\frac{A(\tau)^{k-1}}{(k-1)!}\, d\tau \\
&= -e^{-A(t)}\,\alpha_k(t) + \Bigl[\frac{A(\tau)^k}{k!}\Bigr]_{\tau=0}^{\tau=t}
= -e^{-A(t)}\,\alpha_k(t) + \frac{A(t)^k}{k!} \\
&= e^{-A(t)}\Bigl(e^{A(t)}\,\frac{A(t)^k}{k!} - \alpha_k(t)\Bigr)
= e^{-A(t)}\,\alpha_{k+1}(t),
\end{aligned}$$
where the last equality follows from the identity $\alpha_{k+1}(t) + \alpha_k(t) = e^{A(t)}\,\dfrac{A(t)^k}{k!}$, which finishes the proof.
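The identity of Lemma 6.1 can be checked numerically under an illustrative choice (everything below is an assumption for testing only): $\mu_1 \equiv -1$, so $A(t) = -t$, with $\alpha_k(t) = (-1)^k\bigl(1 - e^{A(t)}\sum_{j=0}^{k-1}(-A(t))^j/j!\bigr)$:

```python
import numpy as np
from math import factorial

# Illustrative check of Lemma 6.1 with mu1(t) = -1, hence A(t) = -t, and
# alpha_k(t) = (-1)^k * (1 - exp(A(t)) * sum_{j<k} (-A(t))^j / j!).
def alpha(k, t):
    part = sum(t ** j / factorial(j) for j in range(k))   # (-A)^j = t^j here
    return (-1) ** k * (1.0 - np.exp(-t) * part)

t_end = 0.7
tau = np.linspace(0.0, t_end, 20001)
dt = tau[1] - tau[0]

gaps = []
for k in (1, 2, 3):
    f = np.exp(tau) * alpha(k, tau) * (-1.0)   # exp(-A) * alpha_k * mu1
    lhs = np.sum(0.5 * (f[1:] + f[:-1])) * dt  # trapezoidal quadrature
    rhs = np.exp(t_end) * alpha(k + 1, t_end)  # exp(-A(t)) * alpha_{k+1}(t)
    gaps.append(abs(lhs - rhs))
max_gap = max(gaps)
```

For $k = 1$ the identity reduces to $\int_0^t (e^{\tau} - 1)\, d\tau = e^{t} - 1 - t$, which the quadrature reproduces to roundoff.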
We now apply Lemma 6.1 and the previous two Theorems 6.3 and 6.4 to prove the following theorem, which supplies an explicit error bound for the successive iterates.

Theorem 6.5 ([37], Theorem 5.1). Suppose that the function $F$ satisfies condition (3.5) and $G$ satisfies conditions (3.7)–(3.10). Moreover, suppose that the function $\mu_1$ has no roots in $[0, T]$. Then,
$$\|E^{(k)}(t)\|_n \le r(t)\,E(t)\,S_k(t) + \sigma_k(t), \qquad (6.6)$$
for $t \in [0, T]$ and $k = 1, 2, \ldots$.

Proof. We proceed by induction on $k$. For $k = 1$, Theorem 6.2 and the bound $\|E^{(0)}_\tau\|^0_n \le E(\tau)$ yield
$$\|E^{(1)}(t)\|_n \le \int_0^t \exp\Bigl(\int_\tau^t \mu_1(s)\, ds\Bigr)\Bigl(\bigl(\mu_2(\tau) + \mu_3(\tau)\bigr)\,\|E^{(0)}_\tau\|^0_n + \gamma_n(\tau)\Bigr)\, d\tau
\le \int_0^t \exp\Bigl(\int_0^t \mu_1(s)\, ds - \int_0^\tau \mu_1(s)\, ds\Bigr)\Bigl(\bigl(\mu_2(\tau) + \mu_3(\tau)\bigr)\,E(\tau) + \gamma_n(\tau)\Bigr)\, d\tau.$$
Hence
$$\|E^{(1)}(t)\|_n \le \int_0^t e^{A(t) - A(\tau)}\Bigl(\bigl(\mu_2(\tau) + \mu_3(\tau)\bigr)\,E(\tau) + \gamma_n(\tau)\Bigr)\, d\tau
= e^{A(t)}\int_0^t e^{-A(\tau)}\,\mu_1(\tau)\Bigl(\frac{\mu_2(\tau) + \mu_3(\tau)}{\mu_1(\tau)}\,E(\tau) + \frac{\gamma_n(\tau)}{\mu_1(\tau)}\Bigr)\, d\tau.$$
We now bound the two quotients in the integrand by their maxima over $[0, t]$ and deduce that
$$\|E^{(1)}(t)\|_n \le e^{A(t)}\,\max_{s \in [0,t]} \frac{\mu_2(s) + \mu_3(s)}{\mu_1(s)}\int_0^t e^{-A(\tau)}\,\mu_1(\tau)\,E(\tau)\, d\tau + e^{A(t)}\,\max_{s \in [0,t]} \frac{\gamma_n(s)}{\mu_1(s)}\int_0^t e^{-A(\tau)}\,\mu_1(\tau)\, d\tau.$$
Therefore, since both maxima do not depend on $\tau$ and the function $E(t)$ is nondecreasing, we deduce that
$$\|E^{(1)}(t)\|_n \le e^{A(t)}\,\max_{s \in [0,t]} \frac{\mu_2(s) + \mu_3(s)}{\mu_1(s)}\,E(t)\int_0^t e^{-A(\tau)}\,\mu_1(\tau)\, d\tau + e^{A(t)}\,\max_{s \in [0,t]} \frac{\gamma_n(s)}{\mu_1(s)}\int_0^t e^{-A(\tau)}\,\mu_1(\tau)\, d\tau,$$
so that
$$\|E^{(1)}(t)\|_n \le e^{A(t)}\,\mathrm{sign}(\mu_1)\Bigl(\max_{s \in [0,t]} \frac{\mu_2(s) + \mu_3(s)}{|\mu_1(s)|}\,E(t) + \max_{s \in [0,t]} \frac{\gamma_n(s)}{|\mu_1(s)|}\Bigr)\int_0^t e^{-A(\tau)}\,\mu_1(\tau)\, d\tau,$$
that is,
$$\|E^{(1)}(t)\|_n \le e^{A(t)}\bigl(r(t)\,E(t) + \bar{\gamma}_n(t)\bigr)\int_0^t e^{-A(\tau)}\,\mu_1(\tau)\, d\tau.$$
Therefore, since
$$\int_0^t e^{-A(\tau)}\,\mu_1(\tau)\, d\tau = \int_0^t e^{-A(\tau)}\,A'(\tau)\, d\tau = \Bigl[-e^{-A(\tau)}\Bigr]_{\tau=0}^{\tau=t} = 1 - e^{-A(t)},$$
we find that
$$\|E^{(1)}(t)\|_n \le e^{A(t)}\bigl(r(t)\,E(t) + \bar{\gamma}_n(t)\bigr)\bigl(1 - e^{-A(t)}\bigr) = \bigl(r(t)\,E(t) + \bar{\gamma}_n(t)\bigr)\bigl(e^{A(t)} - 1\bigr). \qquad (6.7)$$
On the other hand, notice that, for $k = 1$, the right-hand side of inequality (6.6) can be written in the form
$$r(t)\,E(t)\,S_1(t) + \sigma_1(t) = r(t)\,E(t)\,\alpha_1(t) + \bar{\gamma}_n(t)\,S_1(t) = \alpha_1(t)\bigl(r(t)\,E(t) + \bar{\gamma}_n(t)\bigr),$$
and, since $\alpha_1(t) = e^{A(t)} - 1$, inequality (6.6) holds for $k = 1$.
We now suppose that (6.6) is satisfied for a certain $k \ge 1$. From the definition of the maximum norm $\|\cdot\|^0_n$, we have
$$\|E^{(k)}_\tau\|^0_n \le \max_{s \in [-\tau_0, 0]}\Bigl(r(\tau + s)\,E(\tau + s)\,S_k(\tau + s) + \sigma_k(\tau + s)\Bigr).$$
Since, by Theorem 6.4, all functions $\bar{\gamma}_n(t)S_i(t)$, where $i = 1, 2, \ldots$, are nondecreasing, from the definition of the function $\sigma_k$, we conclude that $\sigma_k$ is also nondecreasing. Moreover, the function $E(t)$ is nondecreasing and, by Theorem 6.4, the functions $r(t)S_k(t)$ have the same feature. Therefore, the function being maximized on the right-hand side of the above inequality is also nondecreasing, which implies that
$$\|E^{(k)}_\tau\|^0_n \le r(\tau)\,E(\tau)\,S_k(\tau) + \sigma_k(\tau).$$
From this and Theorem 6.2, we get
$$\|E^{(k+1)}(t)\|_n \le \int_0^t \exp\Bigl(\int_\tau^t \mu_1(s)\, ds\Bigr)\Bigl(\bigl(\mu_2(\tau) + \mu_3(\tau)\bigr)\bigl(r(\tau)\,E(\tau)\,S_k(\tau) + \sigma_k(\tau)\bigr) + \gamma_n(\tau)\Bigr)\, d\tau.$$
From this and the definition of the function $A(t)$, we further deduce that
$$\|E^{(k+1)}(t)\|_n \le e^{A(t)}\int_0^t e^{-A(\tau)}\,\mu_1(\tau)\Bigl(\frac{\mu_2(\tau) + \mu_3(\tau)}{\mu_1(\tau)}\bigl(r(\tau)\,E(\tau)\,S_k(\tau) + \sigma_k(\tau)\bigr) + \frac{\gamma_n(\tau)}{\mu_1(\tau)}\Bigr)\, d\tau.$$
We now bound the two quotients in the above integrand by their maxima and obtain
$$\|E^{(k+1)}(t)\|_n \le e^{A(t)}\,\max_{s \in [0,t]} \frac{\mu_2(s) + \mu_3(s)}{\mu_1(s)}\int_0^t e^{-A(\tau)}\bigl(r(\tau)\,E(\tau)\,S_k(\tau) + \sigma_k(\tau)\bigr)\,\mu_1(\tau)\, d\tau + e^{A(t)}\,\max_{s \in [0,t]} \frac{\gamma_n(s)}{\mu_1(s)}\int_0^t e^{-A(\tau)}\,\mu_1(\tau)\, d\tau.$$
From this and the definitions of $r(t)$ and $\bar{\gamma}_n(t)$, we deduce that
$$\|E^{(k+1)}(t)\|_n \le e^{A(t)}\,r(t)\int_0^t e^{-A(\tau)}\bigl(r(\tau)\,E(\tau)\,S_k(\tau) + \sigma_k(\tau)\bigr)\,\mu_1(\tau)\, d\tau + e^{A(t)}\,\bar{\gamma}_n(t)\int_0^t e^{-A(\tau)}\,\mu_1(\tau)\, d\tau. \qquad (6.8)$$
We now consider the first term, $T_1$, on the right-hand side of (6.8). Writing $S_i(\tau) = (r(\tau))^{i-1}\alpha_i(\tau)$, $r(\tau) = \mathrm{sign}(\mu_1)\max_{s\in[0,\tau]}\frac{\mu_2(s)+\mu_3(s)}{|\mu_1(s)|}$, $\bar{\gamma}_n(\tau) = \mathrm{sign}(\mu_1)\max_{s\in[0,\tau]}\frac{\gamma_n(s)}{|\mu_1(s)|}$, and $\mu_1(\tau) = \mathrm{sign}(\mu_1)\,|\mu_1(\tau)|$, we obtain
$$\begin{aligned}
T_1 = e^{A(t)}\,r(t)\Biggl( &\int_0^t e^{-A(\tau)}\,\bigl(\mathrm{sign}\,\mu_1\bigr)^{k+1}\,\alpha_k(\tau)\Bigl(\max_{s \in [0,\tau]} \frac{\mu_2(s) + \mu_3(s)}{|\mu_1(s)|}\Bigr)^k E(\tau)\,|\mu_1(\tau)|\, d\tau \\
&+ \sum_{i=1}^{k}\int_0^t e^{-A(\tau)}\,\bigl(\mathrm{sign}\,\mu_1\bigr)^{i+1}\,\alpha_i(\tau)\,\Bigl(\max_{s \in [0,\tau]} \frac{\gamma_n(s)}{|\mu_1(s)|}\Bigr)\Bigl(\max_{s \in [0,\tau]} \frac{\mu_2(s) + \mu_3(s)}{|\mu_1(s)|}\Bigr)^{i-1} |\mu_1(\tau)|\, d\tau \Biggr).
\end{aligned}$$
By extending the above maxima from $[0, \tau]$ to $[0, t]$ and interchanging the order of summation and integration, we deduce that
$$\begin{aligned}
T_1 \le e^{A(t)}\,r(t)\Biggl( &\bigl(\mathrm{sign}\,\mu_1\bigr)^{k+1}\Bigl(\max_{s \in [0,t]} \frac{\mu_2(s) + \mu_3(s)}{|\mu_1(s)|}\Bigr)^k E(t)\int_0^t e^{-A(\tau)}\,\alpha_k(\tau)\,|\mu_1(\tau)|\, d\tau \\
&+ \max_{s \in [0,t]} \frac{\gamma_n(s)}{|\mu_1(s)|}\sum_{i=1}^{k}\bigl(\mathrm{sign}\,\mu_1\bigr)^{i+1}\Bigl(\max_{s \in [0,t]} \frac{\mu_2(s) + \mu_3(s)}{|\mu_1(s)|}\Bigr)^{i-1}\int_0^t e^{-A(\tau)}\,\alpha_i(\tau)\,|\mu_1(\tau)|\, d\tau \Biggr).
\end{aligned}$$
Since $|\mu_1| = \mathrm{sign}(\mu_1)\,\mu_1$, Lemma 6.1 gives $\int_0^t e^{-A(\tau)}\,\alpha_i(\tau)\,|\mu_1(\tau)|\, d\tau = \mathrm{sign}(\mu_1)\,e^{-A(t)}\,\alpha_{i+1}(t)$, so that
$$\begin{aligned}
T_1 \le e^{A(t)}\,r(t)\Biggl( &\bigl(\mathrm{sign}\,\mu_1\bigr)^{k}\Bigl(\max_{s \in [0,t]} \frac{\mu_2(s) + \mu_3(s)}{|\mu_1(s)|}\Bigr)^k E(t)\,e^{-A(t)}\,\alpha_{k+1}(t) \\
&+ \max_{s \in [0,t]} \frac{\gamma_n(s)}{|\mu_1(s)|}\sum_{i=1}^{k}\bigl(\mathrm{sign}\,\mu_1\bigr)^{i}\Bigl(\max_{s \in [0,t]} \frac{\mu_2(s) + \mu_3(s)}{|\mu_1(s)|}\Bigr)^{i-1} e^{-A(t)}\,\alpha_{i+1}(t) \Biggr).
\end{aligned}$$
From the definitions of $r(t)$, $\bar{\gamma}_n(t)$, and $S_i(t)$, this means that
$$T_1 \le \bigl(r(t)\bigr)^{k+1} E(t)\,\alpha_{k+1}(t) + \bar{\gamma}_n(t)\sum_{i=1}^{k}\bigl(r(t)\bigr)^{i}\,\alpha_{i+1}(t)
= r(t)\,S_{k+1}(t)\,E(t) + \bar{\gamma}_n(t)\sum_{i=1}^{k} S_{i+1}(t). \qquad (6.9)$$
We now consider the second term, $T_2$, on the right-hand side of (6.8), and obtain the following expression for it:
$$T_2 = e^{A(t)}\,\bar{\gamma}_n(t)\int_0^t e^{-A(\tau)}\,A'(\tau)\, d\tau = e^{A(t)}\,\bar{\gamma}_n(t)\Bigl[-e^{-A(\tau)}\Bigr]_{\tau=0}^{\tau=t} = \bar{\gamma}_n(t)\bigl(e^{A(t)} - 1\bigr) = \bar{\gamma}_n(t)\,S_1(t).$$
Therefore,
$$\|E^{(k+1)}(t)\|_n \le T_1 + T_2 \le r(t)\,S_{k+1}(t)\,E(t) + \bar{\gamma}_n(t)\sum_{i=2}^{k+1} S_i(t) + \bar{\gamma}_n(t)\,S_1(t)
= r(t)\,S_{k+1}(t)\,E(t) + \bar{\gamma}_n(t)\sum_{i=1}^{k+1} S_i(t) = r(t)\,S_{k+1}(t)\,E(t) + \sigma_{k+1}(t),$$
which completes the induction and finishes the proof.
Finally, we compare $\sigma_k(t)$ with the error bound of Theorem 6.1: for $k = 1, 2, \ldots$,
$$\sigma_k(t) < \int_0^t \gamma_n(s)\,\exp\Bigl(\int_s^t \beta_3(\tau)\, d\tau\Bigr)\, ds,$$
for $t \in (0, T]$.
Proof. Let
$$\eta(t) = \int_0^t \gamma_n(s)\,\exp\Bigl(\int_s^t \beta_3(\tau)\, d\tau\Bigr)\, ds,$$
for $t \in [0, T]$. Then, from the definitions of $\gamma_n(t)$ and $\beta_3(t)$, we deduce that
$$\begin{aligned}
\eta(t) &= c_0 h^2 (1 + \varrho)\int_0^t \beta_2(s)\,\exp\Bigl(\int_0^t \varrho\,\beta_2(\tau)\, d\tau - \int_0^s \varrho\,\beta_2(\tau)\, d\tau\Bigr)\, ds \\
&= c_0 h^2 (1 + \varrho)\,\exp\Bigl(\int_0^t \varrho\,\beta_2(\tau)\, d\tau\Bigr)\int_0^t \beta_2(s)\,\exp\Bigl(-\int_0^s \varrho\,\beta_2(\tau)\, d\tau\Bigr)\, ds.
\end{aligned}$$
Since
$$\int_0^t \beta_2(s)\,\exp\Bigl(-\int_0^s \varrho\,\beta_2(\tau)\, d\tau\Bigr)\, ds
= \Bigl[-\frac{1}{\varrho}\,\exp\Bigl(-\int_0^s \varrho\,\beta_2(\tau)\, d\tau\Bigr)\Bigr]_{s=0}^{s=t}
= \frac{1}{\varrho}\Bigl(1 - \exp\Bigl(-\int_0^t \varrho\,\beta_2(\tau)\, d\tau\Bigr)\Bigr),$$
we further obtain
$$\eta(t) = c_0 h^2\,\frac{1 + \varrho}{\varrho}\Bigl(\exp\Bigl(\int_0^t \varrho\,\beta_2(\tau)\, d\tau\Bigr) - 1\Bigr)$$
and
$$\frac{d\eta}{dt}(t) = c_0 h^2 (1 + \varrho)\,\beta_2(t)\,\exp\Bigl(\int_0^t \varrho\,\beta_2(\tau)\, d\tau\Bigr).$$
Let
µ2 (⌧ ) + µ3 (⌧ ) n (⌧ )
µ(t) = max , (t) = max .
⌧ 2[0,t] |µ1 (⌧ )| ⌧ 2[0,t] |µ1 (⌧ )|
Then, from the definition of k (t), n (t), Si (t), and r(t), we find that
50
\[
\Delta_k(t) = \nu_n(t)\sum_{i=1}^{k} S_i(t)
= -\operatorname{sign}(\mu_1)\,\nu_n(t)\sum_{i=1}^{k} r(t)^{i-1}\,\alpha_i(t)
\]
\[
\qquad = -\operatorname{sign}(\mu_1)\,\nu_n(t)\sum_{i=1}^{k}\big(-\operatorname{sign}(\mu_1)\,\mu(t)\big)^{i-1}\,\alpha_i(t)
= \nu_n(t)\sum_{i=1}^{k}\mu(t)^{i-1}\,\big(-\operatorname{sign}(\mu_1)\big)^{i}\,\alpha_i(t).
\]
Since
\[
\max_{\tau\in[0,t]}\frac{\beta_n(\tau)}{|\mu_1(\tau)|} = c_0 h^4 (1+\varrho), \qquad
\max_{\tau\in[0,t]}\frac{\mu_2(\tau)+\mu_3(\tau)}{|\mu_1(\tau)|} = 1 + \varrho h^2,
\]
we have
\[
\Delta_k(t) = c_0 h^4 (1+\varrho)\sum_{i=1}^{k}\big(1+\varrho h^2\big)^{i-1}\,\big(-\operatorname{sign}(\mu_1)\big)^{i}\,\alpha_i(t).
\]
Since
\[
\big(-\operatorname{sign}(\mu_1)\big)^{i}\,\frac{d\alpha_i}{dt}(t)
= \frac{|\mu_1(t)|}{(i-1)!}\,e^{A(t)}\Big(\int_0^t |\mu_1(\tau)|\,d\tau\Big)^{i-1}
= \frac{h^{-2}\lambda_2(t)}{(i-1)!}\Big(h^{-2}\int_0^t \lambda_2(\tau)\,d\tau\Big)^{i-1}\exp\Big(-h^{-2}\int_0^t \lambda_2(\tau)\,d\tau\Big),
\]
we obtain
\[
\frac{d\Delta_k}{dt}(t) = c_0 h^2 (1+\varrho)\,\lambda_2(t)\,\exp\Big(-h^{-2}\int_0^t \lambda_2(\tau)\,d\tau\Big)
\sum_{i=1}^{k}\frac{1}{(i-1)!}\Big(\big(h^{-2}+\varrho\big)\int_0^t \lambda_2(\tau)\,d\tau\Big)^{i-1}
\]
and
\[
\frac{d\Delta_k}{dt}(t) < c_0 h^2 (1+\varrho)\,\lambda_2(t)\,\exp\Big(-h^{-2}\int_0^t \lambda_2(\tau)\,d\tau\Big)\exp\Big(\big(h^{-2}+\varrho\big)\int_0^t \lambda_2(\tau)\,d\tau\Big)
= c_0 h^2 (1+\varrho)\,\lambda_2(t)\,\exp\Big(\varrho\int_0^t \lambda_2(\tau)\,d\tau\Big)
= \frac{d\eta}{dt}(t). \tag{6.11}
\]
Notice that $\Delta_k(0) = 0$ and $\eta(0) = 0$. Therefore, from (6.11), we conclude that $\Delta_k(t) < \eta(t)$, for all $t \in [0, T]$ and $k = 1, 2, \ldots$, which finishes the proof.
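The strict inequality (6.11) rests on the elementary bound $\sum_{i=1}^{k} x^{i-1}/(i-1)! < e^x$ for $x > 0$, since the omitted terms of the exponential series are positive. A quick Python check of this bound, with sample values of $k$ and $x$ chosen arbitrarily:

```python
import math

# Partial sum of the exponential series, as it appears in d(Delta_k)/dt.
def partial_exp(x, k):
    return sum(x ** (i - 1) / math.factorial(i - 1) for i in range(1, k + 1))

# Strict for every x > 0 and every k (at x = 0 the two sides coincide).
for k in (1, 3, 10):
    for x in (0.5, 2.0, 7.3):
        assert partial_exp(x, k) < math.exp(x)
```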
REFERENCES
[1] C.T.H. Baker, C.A.H. Paul, 1996, A global convergence theorem for a class of parallel continuous explicit Runge-Kutta methods and vanishing lag delay differential equations, SIAM J. Numer. Anal. 33, 1559–1576.
[2] C.T.H. Baker, 2014, Observations on evolutionary models with (or without) time lag, and on problematical paradigms, Math. Comput. Simulation 96, 4–53.
[3] B. Basse, B.C. Baguley, E.S. Marshall, W.R. Joseph, B. van Brunt, G.C. Wake, D.J.N. Wall, 2003, A mathematical model for analysis of the cell cycle in cell lines derived from human tumours, J. Math. Biol. 47, 295–312.
[4] B. Basse, Z. Jackiewicz, B. Zubik-Kowal, 2009, Finite-difference and pseudospectral methods for the numerical simulations of in vitro human tumor cell population kinetics, Math. Biosci. Eng. 6, no. 3, 561–572.
[5] A. Bellen, M. Zennaro, 2003, Numerical Methods for Delay Differential Equations, Numerical Mathematics and Scientific Computation, Oxford University Press.
[6] A. Bellen, S. Maset, M. Zennaro, N. Guglielmi, 2009, Recent trends in the numerical solution of retarded functional differential equations, Acta Numer. 18, 1–110.
[7] M. Bjørhus, 1994, On dynamic iteration for delay differential equations, BIT 34, 325–336.
[8] M. Bjørhus, 1995, A note on the convergence of discretized dynamic iteration, BIT 35, 291–296.
[9] H. Brunner, H. Xie, R. Zhang, 2011, Analysis of collocation solutions for a class of functional equations with vanishing delays, IMA J. Numer. Anal. 31, no. 2, 698–718.
[10] H. Brunner, C. Ou, 2015, On the asymptotic stability of Volterra functional equations with vanishing delays, Commun. Pure Appl. Anal. 14, no. 2, 397–406.
[11] S. Brzychczy, 1987, Existence of solutions for non-linear systems of differential-functional equations of parabolic type in an arbitrary domain, Ann. Polon. Math. 47, 309–317.
[13] S.A. Gourley, Y. Kuang, 2005, A delay reaction-diffusion model of the spread of bacteriophage infection, SIAM J. Appl. Math. 65, 550–566.
[14] S.A. Gourley, S. Ruan, 2012, A delay equation model for oviposition habitat selection by mosquitoes, J. Math. Biol. 65, no. 6-7, 1125–1148.
[17] K.J. in 't Hout, M.N. Spijker, 1991, Stability analysis of numerical methods for delay differential equations, Numer. Math. 59, no. 8, 807–814.
[18] K.J. in 't Hout, 1997, Stability analysis of Runge-Kutta methods for systems of delay differential equations, IMA J. Numer. Anal. 17, no. 1, 17–27.
[19] K.J. in 't Hout, B. Zubik-Kowal, 2004, The stability of Radau IIA collocation processes for delay differential equations, Math. Comput. Modelling 40, no. 11-12, 1297–1308.
[25] S. Maset, M. Zennaro, 2014, Good behavior with respect to the stiffness in the numerical integration of retarded functional differential equations, SIAM J. Numer. Anal. 52, no. 4, 1843–1866.
[27] L.F. Shampine, 2005, Solving ODEs and DDEs with residual control, Appl. Numer. Math. 52, 113–127.
[29] J. Szarski, 1976, Uniqueness of the solution to a mixed problem for parabolic functional-differential equations in arbitrary domains, Bull. Acad. Polon. Sci. Ser. Sci. Math. Astron. Phys. 24, 814–849.
[30] S. Thompson, L.F. Shampine, 2006, A friendly Fortran 90 DDE solver, Appl. Numer. Math. 56, 503–516.
[32] J. Wu, 1996, Theory and Applications of Partial Functional Differential Equations, Springer, New York.
[37] B. Zubik-Kowal, 2004, Error bounds for spatial discretization and waveform relaxation applied to parabolic functional differential equations, J. Math. Anal. Appl. 293, 496–510.
[38] B. Zubik-Kowal, 2014, Numerical algorithm for the growth of human tumor cells and their responses to therapy, Appl. Math. Comput. 230, 174–179.
APPENDIX
In this thesis, we apply the following one-dimensional version of [34, Theorem 1]. For this application, we use the symbol $F_c(\{x_i : i = 0, \pm 1, \ldots, \pm\tilde{M}\} \times [-\tau_0, 0], \mathbb{R})$ to denote a class of functions continuously differentiable with respect to the second argument. We also use similar notation for functions on similar domains.
\[
G(P) \le \frac{h}{2}\,H(P),
\]
differentiable with respect to the second input and satisfies the inequality
\[
\tilde{\delta}\rho(x_i, t) = \frac{1}{2h}\big(\rho(x_{i+1}, t) - \rho(x_{i-1}, t)\big),
\]
\[
\tilde{\delta}^{(2)}\rho(x_i, t) = \frac{1}{h^2}\big(\rho(x_{i+1}, t) - 2\rho(x_i, t) + \rho(x_{i-1}, t)\big),
\]
\[
\|\rho(x_i, t)\|_n^h = \max\{|\rho(x_i + x_j, t + \tau)| : j = 0, \pm 1, \ldots, \pm\hat{M},\ \tau \in [-\tau_0, 0]\}. \tag{6.12}
\]
Then,
\[
|\rho(x_i, t)| \le \omega(t),
\]
for all $t \in [0, T]$, $i = 0, \pm 1, \ldots, \pm M$.
For clarity, we omit the proof, which is too technical for the purpose of this thesis.
We refer the reader to [34], where the proof is presented for the multi-dimensional
case.
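The centered difference quotients defined above are the standard second-order approximations of the first and second spatial derivatives. The following Python sketch illustrates this numerically; the choice $\rho = \sin$ is a smooth stand-in used only for illustration, not a function from the thesis:

```python
import math

h = 1e-3
rho = math.sin   # smooth stand-in for rho(., t); illustration only
x = 0.7

# Centered first and second difference quotients, as defined above.
d1 = (rho(x + h) - rho(x - h)) / (2.0 * h)
d2 = (rho(x + h) - 2.0 * rho(x) + rho(x - h)) / h**2

# Both quotients are second-order accurate approximations of rho' and rho''.
assert abs(d1 - math.cos(x)) < 1e-6
assert abs(d2 + math.sin(x)) < 1e-5
```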