Trends in Abstract and Applied Analysis - John R Graef - Johnny Henderson - Lingju Kong and Xueyan Sher
ISSN: 2424-8746
John R. Graef
Series Editor:
The University of Tennessee at Chattanooga, USA
This series will provide state of the art results and applications on current
topics in the broad area of Mathematical Analysis. Of a more focused
nature than what is usually found in standard textbooks, these volumes will
provide researchers and graduate students a path to the research frontiers in
an easily accessible manner. In addition to being useful for individual study,
they will also be appropriate for use in graduate and advanced
undergraduate courses and research seminars. The volumes in this series
will not only be of interest to mathematicians but also to scientists in other
areas. For more information, please go to
http://www.worldscientific.com/series/taaa
Published
John R Graef
Johnny Henderson
Lingju Kong
A catalogue record for this book is available from the British Library.
http://www.worldscientific.com/worldscibooks/10.1142/10888#t=suppl
Email: [email protected]
Printed in Singapore
Dedication
John Graef dedicates this work to his wife Frances and his Ph.D. advisor T.
A. Burton.
Stability theory for first order and vector linear systems is considered. The
relationships among stability of solutions, uniform stability, asymptotic
stability, uniform asymptotic stability, and strong stability are discussed
and illustrated with examples. A section on the stability of vector linear
systems is included. The book concludes with a chapter on perturbed
systems of ODEs.
Preface
7. Stability Theory
7.1 Stability of First Order Systems
7.2 Stability of Vector Linear Systems
Bibliography
Index
Chapter 1
Systems of Differential Equations
1.1 Introduction
image
image
(1) φ ∈ C(1)(I),
image
We shall now begin our work toward the local existence theorems.
image
image
which yields
image
that is,
image
image
Definition 1.2 Let V be a vector space. Then a norm ‖·‖ : V → ℝ has the
properties:
(1) ‖x‖ ≥ 0 for all x ∈ V, and ‖x‖ = 0 iff x = 0;
(2) ‖αx‖ = |α| ‖x‖ for all scalars α and all x ∈ V;
(3) ‖x + y‖ ≤ ‖x‖ + ‖y‖ for all x, y ∈ V;
and denote
image
image
image
image
image
Note: As one goes from one rectangle to another, the constant KQ varies,
i.e., KQ1 is not necessarily equal to KQ2. See Figure 1.1.
Proof. Let fx(t, x) denote the Jacobian matrix of f(t, x); i.e.,
image
image
For the claim, let (t, x), (t, y) ∈ Q and set z(s) = (1 – s)x + sy, 0 ≤ s ≤ 1. We
will first show that (t, z(s)) ∈ Q, for all 0 ≤ s ≤ 1. Consider
image
(One might note here that we have shown Q to be a convex set wrt x.)
Above where we have z(s) = (1 – s)x + sy, we mean of course that z1(s) = (1
– s)x1 + sy1, z2(s) = (1 – s)x2 + sy2, …, zn(s) = (1 – s)xn + syn, since x, y ∈
ℝn.
image
Hence,
image
which implies
image
But z(1) = y and z(0) = x, so f(t, y) – f(t, x) = image. [This is the Mean
Value Theorem for vector-valued functions.]
Finally, we have
image
image
Consider the IVP, x′ = t2 + x2, x(0) = 0, and let a, b > 0 be fixed. Let the
rectangle Q = {(t, x) | |t| ≤ a, |x| ≤ b} ⊆ D = the (t, x)-plane. See Figure 1.2.
image
For this example, we can determine the best possible (maximum) α. Here
M = maxQ |t2 + x2| = a2 + b2, so α = min{a, b/(a2 + b2)}. For fixed a, we
find that the maximum of b/(a2 + b2) as a function of b occurs when b = a,
so that max b/(a2 + b2) = 1/(2a). Thus α = min{a, 1/(2a)}.
To find the maximum α, consider the decreasing function s = 1/(2a) and the
increasing function s = a. The maximum α occurs when a = 1/(2a), or
a = 1/√2, and we have α = 1/√2. See Figure 1.3.
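As a quick check of these calculations, the first few Picard iterates for this IVP can be generated symbolically. The sketch below is ours, not the book's; it assumes sympy is available, and the name picard_iterates is our own.

```python
import sympy as sp

t, s = sp.symbols('t s')

def picard_iterates(f, x0, t0, n):
    """Return x0(t), x1(t), ..., xn(t) for x' = f(t, x), x(t0) = x0."""
    xs = [sp.Integer(x0)]
    for _ in range(n):
        # x_{k+1}(t) = x0 + integral from t0 to t of f(s, x_k(s)) ds
        xs.append(x0 + sp.integrate(f(s, xs[-1].subs(t, s)), (s, t0, t)))
    return xs

# x' = t^2 + x^2, x(0) = 0: x1 = t^3/3, x2 = t^3/3 + t^7/63, ...
for k, xk in enumerate(picard_iterates(lambda u, v: u**2 + v**2, 0, 0, 3)):
    print(k, sp.expand(xk))
```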
image
image
We shall first prove that this sequence is well-defined by showing that (t,
xn(t)) ∈ Q, for |t – t0| ≤ α and n ≥ 0. Clearly (t, x0(t)) = (t, x0) ∈ Q, for |t –
t0| ≤ α ≤ a. We proceed by induction.
Assume that (t, xn(t)) ∈ Q, for |t – t0| ≤ α and 0 ≤ n ≤ k – 1. Consider
image
Then
image
image
So, (t, xk (t)) ∈ Q, for |t – t0| ≤ α. Therefore, by induction, we have (t, xn(t))
∈ Q, for |t – t0 | ≤ α and n ≥ 0.
image
image
image
image
image
image
Now
image
and so
image
Example 1.2 Let us compute this error for the IVP: x′ = t2 + x2, x(0) = 0.
image
image
from the maximizing process for α. Also, image. Thus, for any n = 0,1,2,
…,
image
Exercise 3. For each of the following, determine the best possible α, the
corresponding Lipschitz coefficient K, and calculate the first 3 Picard
iterates.
(3) image
image
Then,
image
Proof. Case I: Let t ∈ I with t > t0 be fixed. Then by (1.4), f(u) ≤ g(u) +
∫t0u K(s)f(s) ds, for all u ∈ [t0, t]. Set ψ(u) = ∫t0u K(s)f(s) ds; then ψ is
differentiable and ψ′(u) = K(u)f(u).
image
which is
image
or
image
image
or
image
Now integrate over [t0, t] and recall both sides are nonnegative, and we
have
image
which gives
image
so that
image
Therefore, image, for all t ∊ I such that t > t0.
Corollary 1.2 If the hypotheses of Theorem 1.2 are satisfied, but g(t) ≡ 0
on I, then f ≡ 0 on I.
image
For such a solution ψ(t), we claim that ‖ψ(t) – x0‖ ≤ b for all t ∈ [t0 – α, t0
+ α] ∩ J.
Assume this is not true, i.e., (t, ψ(t)) ∉ Q for some t ∈ [t0 – α, t0 + α] ∩ J.
Then ‖ψ(t) – x0‖ > b for some t ∈ [t0 – α, t0 + α] ∩ J. However, x0 = ψ(t0),
and hence by continuity, there exists τ ∈ (t0 – α, t0 + α) ∩ J such that
‖ψ(τ) – x0‖ = b and ‖ψ(t) – x0‖ < b on [t0, τ) or on (τ, t0].
image
image
image
which yields
image
By Corollary 1.2 to Theorem 1.2, it follows that ‖φ(t) – ψ (t)‖ = 0, that is,
φ(t) = ψ(t) for all t ∈ [t0 – α, t0 + α] ∩ J.
Proof. Let a fixed IVP as specified be given and let τ ∈ I be fixed, but
arbitrary. Choose a compact interval [a, b] ⊆ I such that t0, τ ∈ [a, b].
image
Then,
image
image
image
image
image
The hypotheses of Theorem 1.3 are satisfied, thus the theorem can be
applied.
Consider
image
where f and x are real-valued functions. Define
image
or
image
image
image
image
image
image
Referring to Example 1.3, for any t0 ∈ I and any ci, 1 ≤ i ≤ n, the IVP
image
image
Exercise 5. For the following second order scalar equations, show that
Theorem 1.3 applies to yield unique C(2) solutions on ℝ, and calculate their
solutions.
(1) image
(2) image
In the Picard existence theorem, the fact that f(t, x) was locally Lipschitz
was instrumental in establishing the uniqueness as well as the existence of
the solution φ.
image
Define xn(t) by
image
where
image
image
The proof proceeds by induction on j. Thus, assume now that ‖xn(t) – x0‖ ≤
b for t0 ≤ t ≤ tj and consider t ∈ [tj, tj+1].
Well,
image
Therefore, Claim 1 is true for t0 ≤ t ≤ tj+1 and hence the claim follows by
induction. ☐
Proof of Claim 2. (a) Assume tj ≤ τ ≤ t ≤ tj+1, for some j. From (1.9), we have
So,
Then
But by (1.9),
Therefore,
which yields
But limnk→∞ xnk(t) = φ(t) uniformly, which implies
So,
Fig. 1.8 K ⊆ H ⊆ D.
Note: If , then .
Before our final theorem in this section, we will look at some examples of
Picard iterates of functions not satisfying Lipschitz conditions. In the first
case, it will be true that the Picard iterates converge to desired solutions,
whereas, in the second case the Picard iterates will not converge.
Note that in the proof of the Picard Theorem, one doesn’t have to set x0(t) =
x0. The only real restriction is that {(t, x0(t))} ⊆ Q.
Hence, again consider the above IVP. We can define x0(t) = t^α, α > 0
fixed, on [0, ∞):
Example 1.7 For this example, we consider a problem which does have a
unique solution; however, no Lipschitz condition holds, and moreover the
Picard iterates do not converge. Consider
All the pieces are continuous except possibly at t = 0. The pieces fit
together at each break for x. Now near t = 0, limt → 0 f(t, x) = 0 = f(0, x) in
each region implies that f is continuous at t = 0 also.
image
Fig. 1.9 The function f(t, x).
image
Suppose, moreover, that x(t) and y(t) are both solutions of the IVP on [t0, t0
+ α].
image
image
image
If h′(t) ≤ 0 on [t0, t0 + α], it follows that h(t) ≡ 0 on [t0, t0 + α], and thus
x(t) ≡ y(t) on [t0, t0 + α].
Theorem 1.5. Let f(t, x) be continuous on the set Q = {(t, x) | t0 ≤ t ≤ t0 + α,
‖x – x0‖ ≤ b}. Assume that, for any (t, x1), (t, x2) ∈ Q,
image
Then, if x(t) and y(t) are solutions of the IVP x′ = f(t, x), x(t0) = x0, each
with its graph contained in Q for t0 ≤ t ≤ t0 + α, it follows that x(t) ≡ y(t) on
[t0, t0 + α ].
Proof. Since {(t, x(t))}, {(t, y(t))} ⊆ Q and since x(t) and y(t) are solutions
of the IVP, we have
image
Note: When x′ = f(t, x) is a scalar equation, (i.e., f(t, x): ℝ × ℝ → ℝ), the
condition given in the theorem becomes (x1 – x2)(f(t, x1) – f(t, x2)) ≤ 0
which is equivalent to saying that f(t, x) is nonincreasing in x, for all fixed t.
☐
Exercise 9. In the preceding Example 1.7, find the unique solution of the
IVP on [0, 1]. Tell why the solution is unique.
Note: Theorem 2.1 says the graph of x(t) depicted in Figure 2.1 cannot
oscillate in and out of the box for t near b, but must be squeezed into the
box; see Figure 2.1.
Proof. We first show that the graph of the solution x(t) stays in the box for t
near b. Assume the hypotheses of the theorem are satisfied, but that there
are values of t arbitrarily close to b such that ‖x(t) − x0‖ > β. Choose m
large enough that 0 < b − tm < α and ‖x(tm) − x0‖ < β/2 (we can do this
since tn ↑ b and x(tn) → x0 as n → ∞), and such that M(b − tm) < β/2. By
continuity, there exists tm < τ < b such that ‖x(τ) − x0‖ = β and ‖x(t) − x0‖ <
β, for tm ≤ t < τ.
Fig. 2.1 x(t) cannot oscillate in and out of the box for t near b.
Now
So,
we have
Consequently, by the Cauchy Criterion limt→b−x(t) exists. (The Cauchy
Criterion is as follows: limt→b−x(t) exists if and only if for all ε > 0, there
exists a δ > 0 such that ‖x(t1) − x(t2)‖ < ε, for any t1, t2 ∈ (b − δ, b).) But
x(t) is continuous and hence limt→b− x(t) = x0.
Thus, we extend x(t) to (a, b] by defining x(b) := x0. Then x(t) is
continuous on (a, b]. Moreover, if f(t, x) can be defined at (b, x0) so as to be
continuous on D ∪ {(b, x0)}, then
Therefore, x(t) has a left-hand derivative at t = b which has the value f(b,
x0). This is a continuation of x(t) to (a, b]. ☐
Note: If x(t) is a solution on an infinite interval, say (a, +∞), then x(t) →
∂ D as t → ∞.
Theorem 2.2 (Continuation Theorem). Let f(t, x) be continuous on an
open set D ⊆ ℝ × ℝn and let x(t) be a solution of x′ = f (t, x) on an interval
I. Then x(t) can be continued to be defined on a maximal interval (α, ω) and
x(t) → ∂D, as t → α and as t → ω.
Proof. We will prove that x(t) can be continued to be defined on a right
maximal interval. Then that extension can be continued to the left to be
defined on a left maximal interval.
Let {Kn} be a sequence of nonnull open subsets of D such that the closures
K̄n are compact, K̄n ⊆ Kn+1 for all n ≥ 1, and ∪Kn = D.
Exercise 11. Describe how such a sequence of sets can be constructed.
Let b be the right end point of I. First, if b = +∞, then I is already right
maximal and, as noted above, x(t) → ∂D as t → ∞.
Second, assume that b < ∞ and I is open at b. Then there are two cases:
(1) for all m ≥ 1, there exists τm ∈ I such that , for τm < t < b,
or
(2) there exists a sequence tn ↑ b and a set Km such that , for
all n ≥ 1,
Case 1: We claim that if Case 1 holds, then I is already right maximal. So
assume Case 1 holds, but suppose I is not right maximal. Then there exists a
proper extension to the right. In particular, if τ ∈ I, then x(t) is a solution on
[τ, b]. Now {(t, x(t)) | τ ≤ t ≤ b} (compact) ⊆ ∪Kn, which is an open covering.
Hence there exists a finite subcovering, so by construction there exists m0
such that {(t, x(t)) | τ ≤ t ≤ b} ⊆ Km0, which contradicts our assumption in
Case 1. Thus, in this case, I is right maximal. We note that x(t) → ∂D as
t → b is also satisfied.
Case 2: Assume that Case 2 holds. Then there exists a sequence tn ↑ b
and m ≥ 1 such that (tn, x(tn)) ∈ K̄m for all n ≥ 1. Since K̄m is compact,
there exists a subsequence with (tnk, x(tnk)) → (b, x0), for some (b, x0) ∈ K̄m.
By Theorem 2.1, there exists an extension of x(t) to I ∪ {b}.
[From this point on, we will also resolve the case where the interval I is
closed at b.]
Now we have (b, x(b)) ∈ K̄m. By Corollary 1.3 of Theorem 1.4, there exists a δm > 0
such that x(t) can be extended to the interval [b, b + δm]. If (b + δm, x(b + δm)) ∈ K̄m,
again by Corollary 1.3 of Theorem 1.4, we can extend x(t) to [b + δm, b + 2δm].
Continue in this manner. But K̄m is compact, hence there exists a bound on the
number of times this “extending” can be done; i.e., there exists jm ≥ 1 such
that (b + jmδm, x(b + jmδm)) ∉ K̄m, but all the previously constructed points belong to K̄m.
Let b1 = b + jmδm. It is true that (b1, x(b1)) ∈ D. Thus, since D = ∪Kn, there
exists m1 > m such that (b1, x(b1)) ∈ Km1. By the same corollary, there exists a
δm1 > 0 such that x(t) can be extended to [b1, b1 + δm1]. Repeat the argument
above. So, we can say there exists an integer jm1 ≥ 1 such that
(b1 + jm1δm1, x(b1 + jm1δm1)) ∉ K̄m1, but all the previously constructed points belong to K̄m1.
As above, let b2 = b1 + jm1 δm1. Continuing the pattern, we obtain an
infinite sequence b < b1 < b2 < b3 < …, and an infinite sequence of integers,
m < m1 < m2 < …, such that x(t) is extended to the closed interval [b, bj] for
any j ≥ 1. Now
Corollary 2.1 Let f(t, x) be continuous on [a, b] × ℝn. Then for any x0 ∈
ℝn, the IVP x′ = f(t, x), x(a)= x0 has a solution x(t) which can be continued
to be defined on a maximal interval which will be either
Assume that t0 < t1 < t0 + α (i.e., assume the graph of (t, x(t)) hits ∂Q
before t reaches t0 + α) (Note: t1 − t0 < α). Then ‖x(t1) − x0‖ = b. Since Q is
closed, (t, x(t)) ∈ Q, t0 ≤ t ≤ t1. See Figure 2.6. Therefore, ‖f(t, x(t))‖ ≤ M,
for t0 ≤ t ≤ t1. Now, from x(t1) − x0 = ∫t0t1 f(s, x(s)) ds, we have
b = ‖x(t1) − x0‖ ≤ M(t1 − t0) < Mα ≤ b,
which is a contradiction. Hence, our assumption that t0 < t1 < t0 + α is false,
which implies that t1 ≥ t0 + α, and consequently x(t) extends to [t0, t0 + α].
Since x(t) was an arbitrary solution of the IVP, it follows that every
solution extends to [t0, t0 + α]. Similar arguments can be made for
extending to [t0 − α, t0]. ☐
Remark 2.1 It is also the case that φ(t) = sup{x(t) | x(t) is a solution of
(2.2)} and ψ(t) = inf{x(t) | x(t) is a solution of (2.2)} are solutions of (2.2)
on [t0 − α, t0 + α].
Assume xn(t) is defined on (αn, ωn). Further assume limn → ∞ (tn, yn) = (t0,
y0) ∈ D. Then there exists a noncontinuable solution x0(t) of
with interval of existence (α0, ω0), and there exists a subsequence of the
sequence {xn(t)} such that, for each compact [a, b] ⊂ (α0, ω0), xnk(t) → x0(t)
uniformly on [a, b], in the sense that there is an N[a, b] ∈ ℕ such that, for
nk ≥ N[a, b], [a, b] ⊂ (αnk, ωnk) and xnk(t) → x0(t) uniformly on [a, b].
Exercise 15. Prove that, from the last statement of the theorem, and ; in
particular .
Proof of the theorem. We will prove that there is a solution x0(t) of (2.3)0 on
a right maximal interval of existence [t0, ω0) and a subsequence of the
sequence {xn(t)} such that, for any τ with t0 < τ < ω0, there is an N[t0, τ] such
that, for nk ≥ N[t0, τ], [t0, τ] ⊂ (αnk, ωnk) and xnk(t) → x0(t) uniformly on
[t0, τ]. See Figure 2.7. Having done this, we return to the original statement
of Theorem 2.3, replace the original sequence {xn(t)} by the subsequence,
and then carry out an analogous procedure on a further subsequence to get a
limit solution on a left maximal interval.
Proceeding with the proof, let {Kn} be a sequence of nonnull open sets such
that the closures K̄n are compact, K̄n ⊆ Kn+1, and ∪Kn = D. See Figure 2.8.
For each j ≥ 1, let ρj = dist(K̄j, complement of Kj+1) > 0. Let Hj be the
closed ρj/2-neighborhood of K̄j. Then Hj is compact and K̄j ⊆ Hj ⊆ Kj+1.
Now, as n → ∞, the sequence fn(t, x) → f0(t, x) uniformly on Hj, which
implies that there exists Mj > 0 such that ‖fn(t, x)‖ ≤ Mj on Hj, for all n ≥ 0.
It follows that, for any point (s, y) ∈ K̄j and any n ≥ 0, the IVP
x′ = fn(t, x), x(s) = y,
has all of its solutions defined on [s − δj, s + δj], where δj > 0 is given by
Corollary 2.2. (Note: The calculation of δj is as in Corollary 2.2: take a = b
= ρj/2 and Mj as above, and then set δj = αj as has been done previously in
Corollary 2.2.)
Also, all solutions are uniformly bounded on [s − δj, s + δj], since ‖x(t)‖
≤ ‖y‖ + Mjδj there.
Furthermore, all solutions satisfy a Lipschitz condition, since for a
typical solution x(t) we have x(t) − x(τ) = ∫τt x′(u) du and ‖x′(u)‖ =
‖fn(u, x(u))‖ ≤ Mj, where τ, t ∈ [s − δj, s + δj]. Hence,
‖x(t) − x(τ)‖ ≤ Mj |t − τ|
holds for all t, τ ∈ [s − δj, s + δj]; that is, every solution to each IVP (2.3)j
satisfies the same Lipschitz condition.
Now assume (t0, y0) ∈ Km1. Then there is an N1 such that, for all n ≥ N1,
(tn, yn) ∈ Km1 and , since (tn, yn) → (t0, y0). See Figure 2.9.
Fig. 2.9 The points t0 and tn.
At this stage we have, for n ≥ N1, [t0, t0 + ε1] ⊆ [tn − δm1, tn + δm1] ⊂ (αn,
ωn), because all solutions of the IVP exist on [tn − δm1, tn + δm1].
Furthermore, is uniformly bounded and equicontinuous on [t0, t0 + ε1].
Therefore, by the Arzelà-Ascoli Theorem, there is a subsequence of
integers such that the subsequence converges uniformly on [t0, t0 + ε1].
Call this limit x0(t); the convergence to x0(t) is uniform on [t0, t0 + ε1].
We claim that x0(t) is a solution of x′ = f0(t, x), x(t0) = y0 on [t0, t0 + ε1].
Let t ∈ [t0, t0 + ε1]. Then
since .
Let n1(k) → ∞. Then , and so,
This verifies the claim and so x0(t) is a solution of x′ = f0(t, x), x(t0) = y0 on
[t0, t0 + ε1].
Now, if (t0 + ε1, x0 (t0 + ε1)) ∈ Km1 (i.e., the point on the graph of x0(t) at
the end point of [t0, t0 + ε1]), then we can repeat this process, since (i.e., the
process we have gone through depends only on the fact that (t0 + ε1, x0(t0 +
ε1)) ∈ Km1). Hence, repeating the above process, we obtain a second
subsequence {n2(k)} ⊆ {n1(k)} such that uniformly on [t0 + ε1, t0 + 2ε1], and
consequently uniformly on [t0, t0 + 2ε1].
Continuing in this manner, we must reach a first integer j1 ≥ 1 such that
(t0 + j1ε1, x0 (t0 + j1ε1)) ∉ Km1, and we will have also obtained
corresponding subsequences {ni(k)}, 1 ≤ i ≤ j1, such that {n(i + 1)(k)} ⊆
{ni(k)}. Define , and assume that , for m2 (recall D = ). Then we start over
and repeat our construction as above.
Summarizing, we obtain a sequence of points (the first point where the graph
leaves Km1, then the first point where it leaves Km2, etc.), a sequence of sets
Km1 ⊆ Km2 ⊆ Km3 ⊆ …, and a nested sequence of subsequences of integers
{n1(k)} ⊇ {n2(k)} ⊇ {n3(k)} ⊇ ….
Let .
We claim that x0(t) is a solution of x′ = f0 (t, x), x(t0) = y0 on [t0,ω0).
To see that this is a solution, let t0 < τ < ω0. Then by the definition of ω0,
there exists . By our construction, there is a subsequence
which converges uniformly to x0(t) on , hence, x0(t) is a
solution on and therefore is a solution on [t0, τ]. But τ was arbitrarily
selected, thus x0(t) is a solution on [t0, ω0). Consider now the array of
sequences:
Proof. Assume that (i) is false; that is, there is no N such that [a, b] ⊆ (αn,
ωn), for all n ≥ N. Then there exists a subsequence {xnk(t)} such that [a, b] ⊄
(αnk, ωnk), for all nk. In the Kamke Theorem, replace the original sequence
with this subsequence. Relative to this subsequence, the hypotheses of the
Kamke Theorem are satisfied. By the Kamke Theorem, there exists a further
subsequence which converges to a solution of IVP (2.3)0, with the
convergence being uniform on each compact subinterval of the interval of
existence for the solution of (2.3)0.
But x0(t) is the unique solution of (2.3)0; hence this further subsequence
converges to x0(t) uniformly on each compact subset of (α0, ω0), which is a
contradiction. Thus (i) holds.
Assume now that (ii) is false. Then there exist ε0 > 0 and a subsequence
{xnk(t)} such that ‖xnk(t) − x0(t)‖ ≥ ε0 at some points t ∈ [a, b], for all nk.
Now replace the sequence in the Kamke Theorem with this subsequence. By
the Kamke Theorem, it follows that there exists a further subsequence
{xni(t)} which converges uniformly to a solution, to x0(t) by uniqueness, of
(2.3)0 on each compact subinterval of (α0, ω0); in particular, for i sufficiently
large, we have ‖xni(t) − x0(t)‖ < ε0, for all t ∈ [a, b], which is a contradiction.
Therefore, limn→∞ xn(t) = x0(t) uniformly on [a, b]. ☐
Remark 2.3 Above, it may be the case that the IVP’s
do not have unique solutions. Yet, if x0(t) is unique, then any sequence of
solutions of (2.3)n converges to x0(t) uniformly on compact subsets of (α0,
ω0).
For our next consequence of the Kamke Theorem, we see that if
solutions of IVP’s are unique, then whenever initial conditions are
perturbed slightly, the resulting solutions stay uniformly near to each other.
Theorem 2.4 (Continuous Dependence of Solutions on Initial
Conditions) Let f(t, x) be continuous on an open set D ⊆ ℝ× ℝn (also
holds on slabs with the appropriate extensions), and assume that IVP’s for
x′ = f(t, x) on D have unique solutions. Given any (t0, x0) ∈ D, let x(t; t0, x0)
denote the solution of
with maximal interval (α(t0, x0), ω (t0, x0)). Then for each ε > 0 and for each
compact [a, b] ⊆ (α(t0, x0), ω(t0, x0)), there exists a δ > 0 such that for all
(t1, x1) ∈ D, |t0 − t1 | < δ and ‖x1 − x0‖ < δ imply that [a, b] ⊆ (α(t1, x1),
ω(t1, x1)), the maximal interval of existence of the solution x(t; t1, x1) of
and ‖x(t; t1, x1) − x(t; t0, x0)‖ < ε on [a, b].
Proof. Assume that there is a (t0, x0) ∈ D, such that [a, b] ⊆ (α(t0, x0), ω(t0,
x0)), and an ε > 0 such that no such δ exists. See Figure 2.11. Choose a
sequence {δn} ↓ 0 such that one or the other of the conclusions fail for each
δn. Hence, for each n, there exists (tn, xn) ∈ D with |t0 − tn| < δn, ‖x0 − xn‖ <
δn. Then, (tn, xn) → (t0, x0), since δn ↓ 0, with one or the other of the
conclusions failing for the corresponding solution x(t; tn, xn) of the IVP
But this violates the previous corollary, since this sequence {x(t; tn, xn)}
satisfies the hypotheses of the Kamke Theorem, plus we have a unique
solution x(t; t0, x0).
Thus, our assumption is false and the conclusions must hold. ☐
Exercise 16. With the hypotheses and notations of Theorem 2.4, prove that
α(t, x) is upper semi-continuous and ω (t, x) is lower semi-continuous on D.
or
is unique. Then, for each (t0, x0, λ0) ∈ D and for each [a, b] ⊂ (α(t0, x0, λ0),
ω(t0, x0, λ0)) and for each ε > 0, there exists δ > 0 such that (t0, x0, λ1) ∈ D
and ‖λ1 − λ0‖ < δ imply [a, b] ⊂ (α (t0, x0, λ1), ω(t0, x0, λ1)) and that ‖x(t; t0,
x0, λ1) − x(t; t0, x0, λ 0)‖ < ε on [a, b].
Note: We are denoting the solution of the IVP x′ = f (t, x, λ0), x(t0) = x0
by x(t; t0, x0, λ0).
Proof. Set
i.e., z is an n + m vector with the first n components those of x and the last
m components those of λ.
Set
Apply Theorem 2.4 to z′ = h(t, z) (note that the solutions zi(t) are unique
since solutions to are unique), and the conclusions follow. ☐
Remark 2.5 Suppose that f(t, x) is continuous on a slab region of the form
[t0, ∞) × ℝn. Further assume that the solution of the IVP
is unique and exists on [t0, ∞). Then by our results concerning continuity of
solutions wrt the initial conditions, given [t0, t1] and given ε > 0, there exists
δ > 0 such that ‖x1 − x0‖ < δ implies all solutions of
extend to [t0, t1] and satisfy ‖x(t; t0, x1) − x(t; t0, x0)‖ < ε on [t0, t1].
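The following minimal sketch illustrates this continuity numerically; the right-hand side f(t, x) = −x + sin t and the tolerances are our own assumptions, chosen only for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Solutions of x' = -x + sin(t) (an assumed example) started delta-close
# at t0 remain close on the whole compact interval [t0, t1].
f = lambda t, x: -x + np.sin(t)
t0, t1 = 0.0, 10.0
ts = np.linspace(t0, t1, 200)

x_ref = solve_ivp(f, (t0, t1), [1.0], t_eval=ts, rtol=1e-9).y[0]
x_pert = solve_ivp(f, (t0, t1), [1.0 + 1e-4], t_eval=ts, rtol=1e-9).y[0]
print(np.max(np.abs(x_pert - x_ref)))  # on the order of the 1e-4 perturbation
```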
Definition 2.4. The solution x(t; t0, x0) is said to be stable in case, for each ε
> 0, there exists δ > 0 such that, for each x1 ∈ ℝn with ‖x1 − x0‖ < δ, the
solution x(t; t0, x1) exists on [t0,∞) and satisfies ‖x(t; t0, x1) − x(t; t0, x0)‖ < ε
on [t0,∞); i.e., a strong continuity property wrt x0 is satisfied with t fixed at
t0.
The equation satisfies a Lipschitz condition on slabs of the form [a, b] × ℝ2.
Let’s look at this equation on, say, [0, 2]. We can write the corresponding
integral equations
Suppose y(t) is a solution of the same differential equation with |y(0) − x(0)|
+ |y′(0) − x′(0)| < δ. Correspondingly, there are similar integral equations in
terms of y. Now, choose y such that y(0) ≠ c0, y′(0) ≠ c1. Subtract
corresponding integral equation expressions and thus obtain
Hence,
Set h(t) = |y(t) − x(t)| + |y′(t) − x′(t)|, and then by (2.4) and (2.5), we have
A special form of the Gronwall inequality states
Chapter 3
Smooth Dependence on Initial Conditions and Smooth Dependence on Parameters
and have maximal interval of existence (α, ω). Choose [t1, t2] ⊂ (α,ω) and
t0 ∊ [t1, t2 ]. Let A(t) be the Jacobian matrix, A(t) = fx(t, x(t)) (i.e., = fx (t,
x(t; t0, x0))). Then x (t; t0, x0) has continuous partial derivatives wrt the
components x0j of x0, for 1 ≤ j ≤ n, and wrt t0 on [t1, t2], hence on (α,ω).
Furthermore, inline is the solution of the IVP
where ej = (δ1j, δ2j,…,δnj)T with
Proof. We consider first inline. Let 1 ≤ j ≤ n be fixed and let xm (t) = x(t;
t0, x0 + δmej), where {δm} → 0 (i.e., xm (t0) = x0 + δmej). Let [t1, t2] be a
fixed compact subinterval of (α, ω). Let η > 0 be such that K ≡ {(s, z) | t1 ≤
t ≤ t2, |s − t| ≤ η and ‖z − x(t; t0, x0)‖ ≤ η} ⊆ D.
By the Kamke Theorem, there exists N1 such that for all m ≥ N1, [t1, t2]
⊂ (αm, ωm), the maximal interval of existence of xm(t), and the graph {(t,
xm(t)) | t1 ≤ t ≤ t2} ⊆ K, i.e., ‖xm(t) − x(t)‖ ≤ η, for t1 ≤ t ≤ t2, and inline
xm(t) = x(t) uniformly on [t1, t2].
Recall here the form of the Mean Value Theorem for vector valued
functions, which we have seen earlier in Chapter 1: f(t, y) – f(t, x) =
inline where fx denotes the Jacobian matrix of f.
So, for all m ≥ N1,
Since xm (t) → x(t) uniformly on [t1, t2] and from the uniform continuity of
fx on K, it follows that as m → ∞,
Now, let
Then, by (3.1),
Now, by construction,
Moreover, inline is continuous on [t1, t2] × ℝn, for all m ≥ N1, and
converges uniformly to fx(t, x(t))y on each compact subset of [t1, t2] × ℝn.
By the Kamke Theorem, it follows that inline uniformly on the compact
subinterval [t1, t2], where z(t) is the solution of the IVP
Now let’s look at z(t) in detail. We see that zm(t) = inline is the type
of difference quotient we would consider in looking for a partial derivative
wrt the x0j component. Above, we showed limm→∞zm(t) existed (or limδm→ 0
of the difference quotient existed). Since this applies to any sequence {δm}
with δm → 0 as m → ∞, we can make the stronger statement that given [t1,
t2] ⊂ (α, ω) with t0 ∈ [t1, t2] and for each ε > 0, there exists δ > 0 such that,
for all h ∊ ℝ with |h| < δ, [t1, t2] is contained in the maximal interval of
existence of x(t; t0, x0 + hej) and
that is, inline But 1 ≤ j ≤ n was arbitrary, thus partial derivatives wrt the
components of the initial vector x0 exist.
For differentiability wrt t0, let δm → 0 and let xm(t) denote the solution
x(t; t0 + δm, x0). As before, we have
Hence,
Consequently,
and as above, zm(t0) → − f(t0, x0). Then proceeding as in the first part of the
proof with uniform convergence, etc., and with the sequence of initial
conditions: (t0, zm(t0)) → (t0, − f(t0, x0)), we obtain inline is the solution of
the IVP
i.e.,
Hence,
Hence,
Suppose now that f(t, x, λ) is continuous and has continuous first partials
wrt the components of x and λ on an open set D ⊆ ℝ× ℝn× ℝm. Consider
the IVP
and let x(t; t0, x0, λ0) denote the solution. We wish to discuss inline, for 1
≤ j ≤ m.
Change the IVP to the nonparametric situation by inline, so that
By Theorem 3.1, inline is the solution of the IVP y′ = hz (t, z(t; t0, z0))·y
satisfying
whose solutions are inline, and yn + j(t) ≡ 1. Thus, we have that inline is
the solution of the IVP
where
Then x(t; t0, x0, λ0) has partial derivatives wrt the components of λ0 on (α(t0,
x0, λ0), ω(t0, x0, λ0)) and inline is the solution of (3.2).
where
i.e., fx(t, x(t; t0, x0)) = A(t), for all solutions x(t) of the original system of
differential equations.
Definition 3.1. Given a solution x(t; t0, x0) of x′ = f(t, x), x(t0) = x0, the
differential equation y′ = fx(t, x(t; t0, x0))y is called the first variational
equation of x′ = f(t, x) wrt the solution x(t; t0, x0).
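Theorem 3.1 can be checked numerically: the partial derivative ∂x/∂x0 obtained by finite differences should match the solution of the first variational equation. The scalar right-hand side below is an assumed example, not taken from the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed example: x' = f(t, x) = x^2 - t, x(0) = x0, with f_x = 2x.
# Then z = dx/dx0 should solve the first variational equation
# z' = f_x(t, x(t)) z = 2 x(t) z, z(0) = 1.
f = lambda t, x: x**2 - t
x0, t0, t1 = 0.5, 0.0, 1.0
ts = np.linspace(t0, t1, 50)

def flow(xi):
    return solve_ivp(f, (t0, t1), [xi], t_eval=ts, rtol=1e-10).y[0]

# Solve x and z together as a 2-dimensional system.
aug = lambda t, u: [f(t, u[0]), 2.0 * u[0] * u[1]]
z = solve_ivp(aug, (t0, t1), [x0, 1.0], t_eval=ts, rtol=1e-10).y[1]

h = 1e-6
fd = (flow(x0 + h) - flow(x0 - h)) / (2.0 * h)  # finite-difference dx/dx0
print(np.max(np.abs(fd - z)))                   # small; the two agree
```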
Remark 3.2. Consider the IVP for the nth order scalar equation,
that is,
where
For this first order system, we have
It follows from Theorem 3.1 that inline is the solution of the IVP
But z1(t) is the first component of inline. So, inline, and in particular
inline is the solution of the nth order linear differential equation in (3.3)
and satisfies the IC’s there.
Note: In (3.3), the coefficient of z(j) is just a continuous function of t.
Example 3.2. This example illustrates Remark 3.2. Let inline and
consider
Now inline are continuous and hence the IVP has a unique solution. By
inspection, our unique solution of the given nonlinear IVP is x(t; 0, c) = x(t;
0, 1, 0) ≡ 1. We wish to calculate .
From above, inline is the solution of the IVP
Hence
Exercise 17. A. Let λ be an m-vector and let x(t; t0, c, λ) be the solution of
the IVP
(So c = (c1, c2,…, cn)T.) Determine the IVP which has inline as a
solution.
B. Apply problem A to each of the following:
(a) x′′ + λx = 0, x(0) = 0, x′(0) = 1, λ0 = 4.
(b) (1−t2)x′′ − 2tx′ + λ(λ+1)x = 0, t0 = 0, λ = n; solution is Pn(t).
(c) t2x′′ + tx′+ (t2 − λ2)x = 0, t0 = 1, λ = 0; solution is J0(t).
C. In B(a), calculate inline by solving the IVP, then differentiating wrt λ.
D. Given the scalar IVP
is unique.
Theorem 3.3. Let φ (t, x) be continuous on an open set D ⊆ ℝ × ℝ and let
(t0, x0) ∊ D. Then the IVP
image
has a maximal solution xM(t) and a minimal solution xm(t).
some t2 with t0 ≤ t2 ≤ t1 such that y(t2) = un0(t2) and y(t) > un0(t) on (t2, t1].
See Figure 3.1. Thus
Therefore,
which is a contradiction. Thus, no such y(t) exists and it follows that x∗(t) is
a maximal solution on [t0, ω∗). A similar argument shows that x∗(t) is a
minimal solution on (α∗, t0].
Now, for each n ≥ 1, let ωn(t) be a solution of
For a function y(t), defined in some neighborhood of the point t0, the Dini
derivatives at the point t are defined as follows:
Proof. Assume the conclusion to be false. Then, there exists t1 > t0 with t1
∈ [t0, t0 + a]∩ (α, ω) such that v(t1) > xM(t1). Now v(t0) ≤ x0 = xM (t0) and
so, as in Theorem 3.3, there exists n ≥ 1 and a solution un(t) of
Corollary 4.1 (To Theorem 4.1). If v(t) is continuous on [a, b] and D+v(t)
≤ 0 on [a, b], then v(t) is nonincreasing on [a, b].
Proof. In Theorem 4.1, take φ(t, x) ≡ 0. Let a ≤ t0 < t1 ≤ b and consider the
IVP
It follows that xM(t) = v(t0), for all t ∈ [a, b] and that v(t) ≤ v(t0), for all t0 ≤
t ≤ b. In particular, v(t1) ≤ v(t0).
where (t0, x0) ∈ D, and if v(t) is a solution of x′ = ψ (t, x) with v(t0) ≤ x0,
then v(t) ≤ xM(t) on a common right interval of existence.
Proof. We have that v′(t) = ψ(t, v(t)) ≤ φ (t, v(t)) for all t in the maximal
interval for v(t). Moreover, v(t0) ≤ x0. It follows from Theorem 4.1 that v(t)
≤ xM(t) on a common right interval of existence.
Exercise 20. Using Corollary 4.2, prove that the solution of the IVP
which yields
for h > 0. The above inequality is true for all h > 0 such that t + h ∈ [a, b).
Hence,
implying, by continuity,
Hence
and so
Then,
and so
Exercise 21. Work out a corresponding result showing that Dℓ‖x(t)‖ exists
on (a, b] and Dℓ‖x(t)‖ ≤ ‖x′(t)‖ on (a, b]. Here, you will take into account
that h < 0.
Corollary 4.3. Assume that φ(t, x) is continuous and nonnegative valued
on an open D ⊆ ℝ × ℝ, and let xM(t) be the maximal solution of x′ = φ(t, x)
satisfying x(t0) = x0 ≥ 0. Let y(t) be a C(1) n-vector valued function on [t0, t0
+ a] such that ‖y′(t)‖ ≤ φ(t, ‖y(t)‖) on [t0, t0 + a] and ‖y(t0)‖ ≤ x0. Then
‖y(t)‖ ≤ xM(t) on any common right interval of existence of y(t) and xM(t).
Further, if zm(t) is the minimal solution of x′ = – φ(t, x) satisfying x(t0) = x1,
and again, if y(t) is a C(1) n-vector valued function on [t0, t0 + a] with
‖y′(t)‖ ≤ φ(t, ‖y(t)‖) on [t0, t0 + a] and ‖y(t0)‖ ≥ x1, then ‖y(t)‖ ≥ zm(t) on any
common right interval of existence of y(t) and zm(t).
and if xM(t) exists on [t0, t0 + a], it follows that y(t) extends to [t0, t0 + a].
Proof. Assume that y(t) is a solution of (4.1) which does not extend to [t0, t0
+ a]. Then the maximal interval of existence for y(t) is of the form [t0, t0 +
η) with 0 < η ≤ a and ‖y(t)‖ → + ∞ as t → (t0 + η)−. However, on [t0, t0 +
η),
By Theorem 4.1, ‖y(t)‖ ≤ xM(t) on [t0, t0 + η). But xM(t) exists on [t0, t0 + a];
thus, there exists B > 0 such that ‖y(t)‖ ≤ xM(t) ≤ B on [t0, t0 + η), which
contradicts ‖y(t)‖ → +∞ as t → (t0 + η)−.
Therefore, if y is a solution of (4.1), then y(t) extends to [t0, t0 + a]. ☐
Remark 4.2 (Application). Consider the IVP for the linear system.
One of the principal uses of Theorems 4.1 and 4.2 and the corollaries is to
obtain uniqueness theorems.
Theorem 4.3 (Kamke Uniqueness Theorem). Let f(t, y) be continuous on
Q = {(t, y)| t0 ≤ t ≤ t0 + a, ‖y – y0‖ ≤ b}. Let φ (t, u) be a real-valued
function satisfying the following conditions:
(1) φ is continuous on (t0, t0 + a] × [0, 2b].
(2) φ (t, 0) ≡ 0 on (t0, t0 + a].
(3) For any 0 < ∊ ≤ a, u(t) ≡ 0 is the only solution of u′ = φ (t, u) on (t0, t0
+ ∊] which satisfies u(t) → 0 and u(t)/(t − t0) → 0 as t → t0+.
Assume that for any (t, y1),(t, y2) ∈ Q with t > t0,
image
Then the IVP
image
has only one solution on any interval [t0, t0 + ∊], with 0 < ∊ ≤ a.
Proof. Assume the conclusion of the theorem is false so that the IVP y′ =
f(t, y), y(t0) = y0 has distinct solutions y1(t) and y2(t) on [t0, t0 + ∊] with 0 <
∊ ≤ a. Let y(t) ≡ y1 (t) – y2 (t). Then there exists a t1∈ (t0, t0 + ∊] such that
‖y(t1) ‖ = ‖y1 (t1) – y2 (t1) ‖ > 0 and ‖y(t)‖ < 2b on [t0, t1]. Then on (t0, t1],
image
Now let vm(t) be the minimal solution of the IVP,
image
and let (α, t1] be the left maximal interval of existence for vm(t) (of course,
(α, t1] ⊆ (t0, t1]).
It follows from part (3) of Exercise 19 that vm(t) ≤ ‖y(t)‖ on (α, t1]. We
claim that vm(t) can be continued to (t0, t1] such that 0 ≤ vm (t) ≤ ‖ y(t)‖ (not
necessarily as a minimal solution all the way to t0, but as a nonnegative
solution nevertheless).
First, if t0 < α < t1 and there exists α′ with α < α ′ < t1, such that vm(t) > 0
on (α′, t1] and vm(α′) = 0, then by continuity, image So,
image
is a solution of u′ = φ (t, u) and satisfies image.
For the other case, if t0 < α < t1 and vm(t) > 0 on (α, t1], since φ(t, u) is
bounded and continuous on [α, t1] × [0, 2b], it follows from our results of
continuation of solutions that vm(t) can be continued to a solution on [α, t1].
(i) If vm(α) = 0, repeat the argument of the previous case with α′ = α.
(ii) If vm(α) > 0, and of course 0 < vm (α) ≤ ‖y(α)‖, then vm(t) could be
continued yet further to the left as a minimal solution of u′ = φ (t, u) (with
u(α) = vm(α)), and satisfy 0 ≤ vm(t) ≤ ‖y(t)‖ on (α – δ, t1], for some δ > 0,
and hence as such is still a minimal solution of u′ = φ (t, u), u(t1) = ‖y(t1)‖,
which is a contradiction to the fact that (α, t1] is left maximal.
Therefore, either by construction, or from the impossibility of case (ii)
above, it follows that 0 ≤ vm(t) ≤ ‖y(t)‖ on (t0, t1]. Hence
image
So, vm (t) → 0, as t → t0. Also,
image
and so as t ↓ t0, we have
image
i.e., image, as t ↓ t0. From condition (3), vm(t) ≡ 0 on (t0, t1]; this is a
contradiction to vm(t1) = ‖y(t1)‖ > 0.
Therefore, it follows that y1(t) and y2(t) are not distinct solutions of the
IVP. ☐
Corollary 4.5 (Nagumo). If f(t, y) is continuous on Q = {(t, y) | t0 ≤ t ≤ t0 +
a, ‖y – y0‖ ≤ b} and if, for any points (t, y1), (t, y2) ∈ Q with t > t0,
‖f(t, y1) − f(t, y2)‖ ≤ ‖y1 − y2‖/(t − t0), then the solution of the IVP
image
is unique to the right.
Proof. Define φ(t, u) = u/(t − t0) on (t0, t0 + a] × [0, 2b]. Then φ satisfies (1)
and (2) of Theorem 4.3. Consider (t1, u0) ∈ (t0, t0 + a] × [0, 2b] and the IVP
u′ = u/(t − t0), u(t1) = u0.
It follows that u(t) = u0(t − t0)/(t1 − t0) is the unique solution. It is the case
that u(t) → 0 as t → t0; however, u(t)/(t − t0) → u0/(t1 − t0) ≠ 0 unless u0 = 0.
Thus condition (3) of Theorem 4.3 is also satisfied.
Therefore, there is a unique solution to the right by Theorem 4.3. ☐
Theorem 4.4. Let f(t, y) be a continuous real-valued function on Q = [t0, t0
+ a] × [y0 – b, y0 + b] ⊆ ℝ × ℝ. Let φ (t, u) be a continuous real-valued
function on (t0, t0 + a] × [0, 2b] and assume φ is nondecreasing in u for each
fixed t. Then, if the hypotheses of Theorem 4.3 are satisfied, it follows that
the sequence of Picard iterates image, n ≥ 1, converges uniformly on [t0,
t0 + α] to a solution of the IVP
image
where α = min{a, b/M}, M = maxQ |f(t, y)|.
We note that Theorem 4.4 applies to first order scalar equations.
Chapter 5
Linear Systems of Differential Equations
5.1 Linear Systems of Differential Equations
We have previously shown that, if ‖x‖ = max1≤i≤n|xi|, for x ∈ ℝn, then for
A a constant matrix,
Now, we want to discuss the induced norm inline, where inline is the
Euclidean norm of ℝn. Prior to this, we will discuss some properties of the
inner product. An inner product is a mapping 〈·,·〉 : ℂn × ℂn → ℂ, where ℂ
is the set of complex numbers, defined by:
Then
(4) inline
Lemma 5.1. Let x = (x1, x2, …, xn), y = (y1, y2, …, yn) ∈ ℂn. Then
Proof. The first part is true by the triangle inequality and after that, the
Schwarz inequality applies. ☐
so that
Hence,
This expression will not equal ‖A‖ unless the rows and columns of A are
scalar multiples of each other. In a later setting, we may make use of this
expression as an upper bound for ‖A‖.
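A quick numerical comparison of the two quantities; this sketch assumes numpy, and the random test matrix is our own choice.

```python
import numpy as np

# The induced (spectral) norm never exceeds sqrt(sum |a_ij|^2),
# and in general the inequality is strict.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
spectral = np.linalg.norm(A, 2)       # norm induced by the Euclidean norm
frobenius = np.linalg.norm(A, 'fro')  # sqrt of the sum of squared entries
print(spectral, frobenius, spectral <= frobenius)
```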
We now state and prove some basic theorems concerning linear systems.
Lemma 5.2. Assume A(t) and f(t) are continuous on I ⊆ ℝ. Then for each
point (t0, x0) ∈ I × ℝn, each of the IVP’s
Proof. Since f(t) ≡ 0 in (5.1), we will directly deal with (5.2). First, for x1,
x2 ∊ ℝn, we have
is x(t) ≡ 0.
Our next results are concerned with the solution space of (5.1).
Theorem 5.2. Assume that A(t) and f(t) are continuous on I ⊆ ℝ. Let x1(t),
x2(t), …, xm(t) be solutions of (5.1); let α1, α2, …, αm be constants. Then
inline is a solution of (5.1). Moreover, if y1(t) and y2(t) are solutions of
(5.2), then y1(t) − y2(t) is a solution of (5.1).
Proof. First, by
Proof. For the first assertion, assume x1(t), …, xm(t) are solutions of (5.1)
and that α1, …, αm are scalars and t0 ∊ I such that inline. By Theorem
5.2, inline is a solution of (5.1). Moreover, x(t0)= 0, and so by Corollary
5.1, x(t) ≡ 0 on I.
Assume now that x1(t), …, xn(t) are solutions of (5.1) such that at some t0,
x1(t0), …, xn(t0) are L.I. vectors in ℝn. Such solutions exist; e.g., for each 1
≤ j ≤ n, let xj(t) be the solution of
We claim that any such set x1(t), …, xn(t) consists of L.I. vectors in ℝn at all
points t ∈ I. If not, then there are scalars α1, …, αn, not all zero, and t1 ∈ I,
such that α1x1(t1) + … + αnxn(t1) = 0. By the first assertion, α1x1(t) + … +
αnxn(t) ≡ 0 on I, and in particular α1x1(t0) + … + αnxn(t0) = 0, which
contradicts the L.I. of x1(t0), …, xn(t0). Hence x1(t), …, xn(t) are L.I. in ℝn
for all t ∈ I.
We now show that x1(t), …, xn(t) span the solution space of (5.1). Let z(t)
be any solution of (5.1). Since x1(t0), …, xn(t0) are L.I. in ℝn, they must
constitute a basis for ℝn. Now z(t0) ∈ ℝn, so there are scalars α1, …, αn such
that z(t0) = α1x1(t0) + … + αnxn(t0). Now z(t) and α1x1(t) + … + αnxn(t) are
both solutions of (5.1) satisfying the same initial condition at t0, and so by
Lemma 5.2, z(t) = α1x1(t) + … + αnxn(t), for t ∈ I.
Therefore, x1(t), …, xn(t) span, and hence form a basis for, the solution
space of (5.1). ☐
Corollary 5.2. The solution set of (5.2) consists of all y(t) = y0(t) + α1x1(t)
+ … + αnxn(t), where y0(t) is some fixed solution of (5.2), x1(t), …, xn(t) are
L.I. solutions of (5.1), and α1, …, αn are scalars.
Then the solution of (5.2) will be of this form, since y(t) − y0(t) and
α1x1(t) + … + αnxn(t) satisfy the same IVP for (5.1). ☐
Exercise 24. Assume that f(t, x) is continuous and has continuous first
partial derivatives wrt the components of x on an open set D ⊆ ℝ× ℝn. Let
(t0, x0) ∊ D and (α, ω) be the maximal interval of existence of x(t; t0, x0),
the solution of inline and xj(t) = inline Show inline on (α, ω), where
fj(t, x) is the jth component of f(t, x); i.e.,
5.2 Some Properties of Matrices
Let Mn denote the set of all n × n matrices with complex entries. Then,
we have the following properties of Mn.
(1) Mn is a vector space over ℂ with vector operations defined
for α ∈ ℂ, A = (aij) and B = (bij) as
(3) A^T = (aij)^T = (aji), and so (AB)^T = B^T A^T. Also, Ā denotes the
entrywise complex conjugate of A, and the adjoint is A* = Ā^T. The identity
matrix is I = (δij), and if, associated with A, there is a matrix B such that AB
= BA = I, then A is said to be nonsingular. We write A−1 = B.
If A ∈ Mn is such that det A ≠ 0, and B = (bij), where bij is the cofactor
of aij, then A is nonsingular and A−1 = (1/det A)B^T.
Exercise 25. Prove that if A is nonsingular, then det A ≠ 0. (Note: det(AB) =
det A · det B.)
The range space R(A) = {Ax | x ∈ ℂn}, and the null space N(A) = {x ∈ ℂn |
Ax = 0}. From linear algebra, dim R(A) + dim N(A) = n. Moreover,
(ii) ‖Ax‖ ≤ ‖A‖ ‖x‖.
(iii) ‖αA‖ = |α| ‖A‖.
(iv) ‖A + B‖ ≤ ‖A‖ + ‖B‖.
(v) ‖AB‖ ≤ ‖A‖ ‖B‖.
☐
Proof of (v). Notice ‖ABx‖ ≤ ‖A‖ ‖Bx‖ ≤ ‖A‖ ‖B‖ ‖x‖, for any x. Taking the
supremum over ‖x‖ ≤ 1, we get ‖AB‖ ≤ ‖A‖ ‖B‖. ☐
Proof. Let {Ak} be a Cauchy sequence in Mn. Then for each ε > 0, there
exists Nε such that d(Ak, Al) = ‖Ak − Al‖ < ε, for all k, l ≥ Nε. Thus, for all
1 ≤ p, q ≤ n,
which converges to e|t|‖A‖, for all |t| and ‖A‖. Thus, for this series of
numbers, the Cauchy criterion is satisfied by its sequence of partial sums.
Hence, for each ε > 0, there exists Nε > 0 such that
Now let {Sn} be the sequence of partial sums associated with inline. Then
for the same ε > 0 and for all n > m ≥ Nε,
Hence, {Sn} forms a Cauchy sequence in Mn and hence {Sn} converges
by the completeness of Mn; i.e., the series converges, for all t and A.
Since the above series closely resembles the exponential series, we define
etA = I + tA + (t2/2!)A2 + (t3/3!)A3 + ⋯.
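The definition can be exercised directly: partial sums of the series converge rapidly to the matrix exponential. The sketch below compares them against scipy's expm; the test matrix is an assumption of ours.

```python
import numpy as np
from scipy.linalg import expm

def expm_series(A, t, terms=30):
    """Partial sum of I + tA + (tA)^2/2! + ... up to terms-1."""
    S = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ (t * A) / k   # term = (tA)^k / k!
        S = S + term
    return S

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
print(np.max(np.abs(expm_series(A, 1.0) - expm(A))))  # ~ machine precision
```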
yields
which implies
(8) inline
(9) Suppose the matrix Φ (t) satisfies the D.E. Φ ′(t) = A(t) Φ (t). Then,
Now, from our previous theory, since h(t, x) = A(t)x + f(t), where A(t) ∈
Mn is continuous and f(t) ∈ ℂn is continuous, h(t, x) satisfies a Lipschitz
condition wrt x on each slab [a, b] × ℂn. It follows from the Picard
Existence Theorem that the linear system
image
has a unique solution on all of J, and this solution is also the uniform limit
of Picard iterates on each compact subinterval of J.
In summary, if A(t) and f(t) are continuous on J, then the unique solution of
the IVP (5.4) can be written as
In a similar way, we can take under our consideration IVP’s for a matrix
system,
image
The matrix IVP (5.6) is equivalent to the IVP for a system of n2 scalar
linear equations
Also, inline is a solution of the matrix equation iff each of its column
vectors inline is a solution of the vector equation
where bj(t) is the jth column of B(t). (Note: One obtains the jth column of
A(t)Φ(t) by multiplying A(t) by the jth column of Φ(t); i.e., A(t)Φj(t).)
In light of our previous discussions, the unique solution of the IVP
By Picard iteration,
Observe that since this is a solution of the D.E. above, we also have
Exercise 27.
Proof. Notice
Proof. Assume that for some t0 ∈ J, X(t0) is singular. Now recall that a
matrix B ∈ Mn is singular iff there exists c ∈ ℂn, c ≠ 0, such that Bc = 0.
Thus there exists c ≠ 0 such that X(t0)c = 0.
Note: Lemma 5.4 says that if c ∈ N(X(t0)), for some t0, then c ∈
N(X(t)), for all t ∈ J. This is strong and is due to the fact that X(t) is a
solution of the D.E.
Theorem 5.4. Let X(t) be a solution of X′ = A(t)X and let t0 ∈ J. Then det
X(t) = det X(t0) exp(∫t0t tr A(s) ds). (Note: This also confirms Lemma 5.4.)
then
(By elementary row operations, every row cancels with each of the sums in
the kth row.) Hence,
Consider now Theorem 5.4 applied to the nth order linear equation,
which is called the Wronskian of the solutions x1(t), …, xn(t) of the nth
order linear equation (5.8) and is denoted by W(t; x1, …, xn). From
Theorem 5.4,
will be denoted by X(t, t0) (i.e., this is the solution which satisfies X(t0, t0)
= I).
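Theorem 5.4 is easy to verify numerically: integrate the matrix equation X′ = A(t)X with X(t0) = I and compare det X(t) with the exponential of the integrated trace. The coefficient matrix below is an assumed example of ours.

```python
import numpy as np
from scipy.integrate import quad, solve_ivp

# Check det X(t1) = det X(t0) * exp(int_{t0}^{t1} tr A(s) ds), X(t0) = I.
A = lambda t: np.array([[0.0, 1.0], [-np.sin(t), -0.5]])  # assumed A(t)
rhs = lambda t, x: (A(t) @ x.reshape(2, 2)).ravel()
t0, t1 = 0.0, 2.0
sol = solve_ivp(rhs, (t0, t1), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
det_X = np.linalg.det(sol.y[:, -1].reshape(2, 2))
tr_int = quad(lambda s: np.trace(A(s)), t0, t1)[0]
print(det_X, np.exp(tr_int))  # the two agree to solver accuracy
```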
Definition 5.2. The system x′ = –A*(t)x is called the adjoint system wrt the
system x′ = A(t)x.
Exercise 28. (i) Show that for any s, t, t0 ∊ J, X(t, t0) = X(t, s)X(s, t0).
Hence [X(s, t0)]−1 = X(t0, s).
(ii) Let A(t) be continuous on [0, +∞) and assume that ‖x(t)‖ is bounded on
[0, +∞) for each solution x(t) of the vector equation x′ = A(t)x. Let X(t) be a
fundamental matrix solution of X′ = A(t)X. Then prove that ‖X−1(t)‖ is
bounded on [0, +∞) iff Re ∫0t tr A(s) ds is bounded below on [0, +∞).
Proof. We seek a solution z(t) of x′ = A(t)x + f(t) of the form z(t) = X(t)y(t),
and we try to determine y(t). Now inline.
which yields
Consequently,
and so
can be written as
Lemma 5.5. x1(t), …, xm(t) are L.I. in the vector space C(n−1)[J, ℂ] iff
y1(t), …, ym(t) are L.I. in the vector space C[J, ℂn].
Proof. Assume that y1, …, ym are L.D. in C[J, ℂn]. So, there exist α1, …,
αm, not all zero, in ℂ such that α1y1(t) + … + αmym(t) ≡ 0 on J. Hence, the
sum in the first component is zero. In other words, α1x1(t) + … + αmxm(t) ≡
0 on J. Therefore, x1, …, xm are L.D.
Conversely, assume that x1, …, xm are L.D. in C(n−1)[J, ℂ]. Then, there
exist α1, …, αm, not all zero, in ℂ such that α1x1(t) + … + αmxm(t) ≡ 0 on J.
Upon differentiating n − 1 times, we have α1x1(i)(t) + … + αmxm(i)(t) ≡ 0 on
J, for all 1 ≤ i ≤ n − 1. From the manner in which the yi(t) were constructed,
we have α1y1(t) + … + αmym(t) ≡ 0 on J, or y1, …, ym are L.D.
In fact, the fundamental matrix solution X(t) of (5.11) satisfying the initial
condition X(t0) = I has as the jth element in the first row, the solution xj(t)
of (5.9) which satisfies
Thus, if x1, …, xn satisfy the above initial conditions, then by Theorem 5.4,
Example 5.2. Let x1(t) and x2(t) be solutions of x′′ + 2tx′ − t2x = 0 satisfying
the respective initial conditions
Then
Now the unique solution of the IVP corresponding to (5.9)
Recall
And u(t, s) is frequently called the Cauchy Function for the equation
In summary, the unique solution of the IVP
is given by
Example 5.3.
(1) x(n) = 0.
is given by
(2) x′′ + x = 0.
is given by
(3) From (1), we can derive Taylor’s formula with remainder.
Theorem 5.6. Let f(t) ∊ C(n − 1)[a, b] and suppose f(n)(t) exists and is
integrable on [a, b]. Then
This has a unique solution which must be f(t). The solution of the
homogeneous equation x(n) = 0 is given by a suitable combination of the
solutions xj(t) satisfying
where
Suppose x1(t) and x2(t) are L.I. solutions of the homogeneous equation x″ +
p1 (t)x′ + p2 (t)x = 0. Then we sought (in the elementary differential
equation course) a solution of (5.13) in the form z(t) = c1 (t)x1 (t) + c2 (t)x2
(t) (i.e., the “constants” vary, hence the name).
Now if we assume inline, and if inline, then it is the case that z(t) = c1
(t)x1 (t) + c2 (t)x2 (t) is a solution of (5.13).
Thus, if
From the associated vector system y′(t) = A(t)y, consider the adjoint system
y′(t) = − A*(t)y, where
If the pi’s are continuous on J, then any solution y(t) of the adjoint system
belongs to C(1) [J,ℂn]. To consider a scalar equation adjoint to (5.14), one
is usually concerned with differentiability of products inline.
Exercise 30. The first part of this exercise is not related to the above
theory, but perhaps you will be able to complete it successfully. Assume
that p1 (t),…,pn (t) are continuous on an interval J. Then, if [a, b] ⊆ J, any
solution x(t) ≢ 0 of x(n) + p1 (t)x(n − 1) + … + pn (t)x = 0 can have at most a
finite number of zeros on [a, b]. Is this true for the nth component yn(t) of a
solution y(t) ≢ 0 of the adjoint system? (For this second part, use the above
theory.)
Exercise 32. Prove that X′ = A(t)X has a fundamental matrix solution X(t)
which is unitary; that is, X−1(t) = X* (t), iff A(t) ≡ − A*(t). Is this possible
for systems obtained from nth order scalar equations?
Exercise 33. If A(t) ≡ −A*(t) (i.e., A(t) is skew-symmetric), show that for
any solution x(t) of x′ = A(t)x, 〈x(t), x(t)〉 is a constant.
For some time now, we will consider systems of equations with constant
coefficient matrices. Let A be a constant matrix.
Lemma 5.6. The eigenvalues of A are the roots of the polynomial equation
det[A − λI] = 0.
Definition 5.4 The det[A − λI] is called the characteristic polynomial of the
matrix A.
Proof. Assume that c1,…,cm are not L.I. Then, there exists a first integer j
with 1 < j < m such that c1,…,cj−1 are L.I., but c1,…,cj are L.D. Hence,
there exist scalars α1,…, αj, not all zero, such that α1c1 + … + αjcj = 0.
Since Aci = λici, we have 0 = A·0 = A(α1c1 + … + αjcj) = α1λ1c1 + … +
αjλjcj. Moreover, λj ≠ 0. For if λj = 0, we would have above that α1λ1c1 +
… + αj−1λj−1cj−1 = 0 and λi ≠ 0, for 1 ≤ i ≤ j − 1. So, αiλi = 0, 1 ≤ i ≤ j − 1,
since c1,…,cj − 1 are assumed L.I., thus αi = 0, 1 ≤ i ≤ j − 1. Since α1c1 + …
+ αjcj = 0 and cj ≠ 0, we would then have αj = 0, which is a contradiction to
the assumption that they were not all zero.
Consequently λj ≠ 0, and since the λi’s are all distinct, we have λi/λj ≠ 1, for
all i ≠ j.
which implies
Now
yields
If
Then,
which yields
Let
Now
We could extend the basis for N(B) to a basis for N(B2) = ℂ4 or we
could find another basis for ℂ4.
Take {e1, e2, e3, e4} as our basis for N(B2). Apply B to each of these basis
vectors as is done in the calculation above denoted by (5.17). Since B2(any
vector) = 0, from the form of a solution given in (5.18), four solutions
will be of the form (note: r = 2, λ = 2 relative to (5.17))
We have
as
so that
Here, our x1(t) and x2(t) are linear combinations of those found using the
first method.
where the λq+i’s on the diagonal are the same and 1’s are on the
superdiagonal.
The numbers λ1, λ2, …, λq+s are the eigenvalues of A and are repeated in J
according to their multiplicities. Furthermore, if λj is a simple eigenvalue,
then λj appears in J0; hence, if all the eigenvalues are simple, J is a diagonal
matrix. The matrix J is called the “Jordan Canonical Form of A” and is
unique up to rearrangements of the blocks Ji, 0 ≤ i ≤ s, along the diagonal.
(So, there exists a nonsingular C such that C−1AC = J.)
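Computer algebra systems can produce C and J directly. Since the book's 4 × 4 example matrix is not reproduced here, the sketch below uses an assumed 2 × 2 matrix with characteristic polynomial (λ − 2)2 and a single Jordan block.

```python
import sympy as sp

# Assumed example: det[A - λI] = (λ - 2)^2 with one 2x2 Jordan block.
A = sp.Matrix([[3, 1],
               [-1, 1]])
C, J = A.jordan_form()                    # A = C J C^{-1}
print(J)                                  # Matrix([[2, 1], [0, 2]])
print(sp.simplify(C.inv() * A * C - J))   # the zero matrix, so C^{-1}AC = J
```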
Let us look at the previous example and see how the Jordan form relates.
Recall x′ = Ax, where
We found that det[A – λI] = (λ – 2)4 and
and B2 = (0). Extending the basis for N(B) to a basis for N(B2) = ℂ4, we
obtained
we have Bc2 = −2c1 and Bd2 = d1. Hence [A − 2I]c2 = −2c1, or Ac2 =
2c2 − 2c1. On the other hand, Bc1 = 0 ⇔ [A − 2I]c1 = 0. So, Ac1 = 2c1.
Now Bd2 = d1 yields Ad2 = 2d2 + d1 and Bd1 = [0] yields Ad1 = 2d1. Form
C by taking c1, c2, d1, d2 as its columns; i.e., C = [c1, c2, d1, d2]4×4.
Now
where C−1C = I because
This is almost the Jordan form. We need to normalize the −2 entry. Thus,
we return to the point where we extended our basis and we choose
Hence, if X(t) = CY (t)C−1, and if we can find Y (t) = etJ, we have an idea of
what X(t) = etA looks like.
where
Thus
Hence, calculating etJ is reduced to calculating etJi for each block Ji.
First,
yield . Hence
that is,
Thus, we know what each block of etJ looks like. Since etA = CetJC−1,
where C is a nonsingular constant matrix, the elements in etA are sums of
terms of the form pj(t)eλjt, where each pj(t) is a polynomial in t.
Furthermore, if the largest of the blocks Ji, 1 ≤ i ≤ s (not J0), is an m × m
matrix, then all polynomials appearing in the elements of etA are of degree
≤ m − 1.
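For a single Jordan block this structure is visible symbolically; the block below, with m = 2 and λ = 2, is chosen to match the discussion, and the computation assumes sympy.

```python
import sympy as sp

t = sp.symbols('t')
J = sp.Matrix([[2, 1],
               [0, 2]])        # one 2x2 Jordan block, λ = 2
print((t * J).exp())           # [[exp(2t), t*exp(2t)], [0, exp(2t)]]
```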
Remark 5.5.
(1) If Re λj < 0, for each eigenvalue λj of A, then all solutions x(t) are such
that ‖x(t)‖ is bounded on [0, +∞) and x(t) → 0 as t → +∞.
Reason: Using the max norm, each entry of etA is a sum of terms pj(t)eλjt;
if we look at such a piece, with Re λj < 0 we have pj(t)eλjt → 0 as t → +∞.
So, ‖x(t)‖ → 0 as t → +∞.
(2)If Re λj ≤ 0, for each eigenvalue λj of A, then all the solutions x(t) are
bounded on [0, +∞) iff the λj’s for which Reλj = 0 appear only in J0, the
diagonal block in the Jordan canonical form of A.
Thus, with Re λj ≤ 0 for each j, the only way for a solution to be unbounded
is for a λj with Re λj = 0 to appear in a block Ji having a nonzero entry off
the diagonal.
then
Note that, if A is nonsingular, then all the eigenvalues are nonzero, and
hence, each eigenvalue has a logarithm (in the complex sense, log λ = ln|λ|
+ i arg λ).
Now
because
and we obtained
Since the blocks of the Jordan form are the same, we need only calculate
the logarithm for one block.
Note: m = 2 and λ = 2, so
Hence,
Therefore,
Chapter 6
Remark 6.1. Since Y(t) is continuous on ℝ and periodic, each entry in Y(t)
is continuous and periodic, hence bounded. So, ‖Y(t)‖ is bounded on ℝ.
Moreover, by the way the determinant is defined, det Y(t) is periodic. Since
det Y(t) is continuous, periodic, and never zero, it is bounded and bounded
away from zero. Consequently, ‖Y−1(t)‖ is bounded.
Definition 6.2 Let A(t) and B(t) be defined on [a, +∞). Then A(t) is said to
be kinematically similar to B(t) on [a, +∞) in case there is an absolutely
continuous (we will think differentiable) nonsingular matrix function L(t)
on [a, +∞) such that ‖L(t)‖ and ‖L−1(t)‖ are both bounded on [a, +∞) and
such that the transformation x = L(t)y transforms the system x′ = A(t)x into
the system y′ = B(t)y; i.e., L−1(t)A(t)L(t) − L−1(t)L′(t) ≡ B(t) on [a, +∞).
Exercise 35. (i) Show that kinematic similarity is an equivalence relation on
the set of all continuous n × n matrix functions on [a, +∞).
(ii) Show that if A(t) is continuous and has period ω on (−∞,∞), then A(t)
is kinematically similar to a constant matrix on ℝ.
Note: Kinematic similarity can be thought of as an extension of
similarity, by taking L to be a constant matrix.
Now let A(t) be continuous and ω-periodic on ℝ. If X(t) is a fundamental
matrix solution of X′ = A(t)X, then there is a nonsingular matrix C such that
X(t + ω) = X(t)C, for all t ∈ ℝ.
Remark 6.2. The matrix C is not uniquely determined by A(t), however C
is unique up to similarity.
Proof. Let X(t) and Φ(t) both be fundamental matrix solutions of X′ = A(t)X.
Then there are matrices C and D such that X(t + ω) = X(t)C and Φ (t + ω) =
Φ (t)D. Furthermore, there is a nonsingular matrix B such that X(t) = Φ (t)B,
since they are both fundamental matrix solutions. So,
Hence, C = B−1DB; i.e., C and D are similar. ☐
It follows then that A(t) does uniquely determine the eigenvalues of C,
because similar matrices have the same eigenvalues. In fact, these
eigenvalues are called the characteristic multipliers of the periodic system
X′ = A(t)X.
For the above matrix C, where X(t + ω) = X(t)C, if σ1, …, σn are the
eigenvalues of C and λ1, …, λn are the eigenvalues of R, where C = eωR,
then if both sets of eigenvalues are properly ordered, we will have that
σj = eωλj. The numbers λj are not uniquely determined, but are determined
up to additive integer multiples of 2πi/ω. The numbers λj, 1 ≤ j ≤ n, are
called the characteristic exponents of the periodic system X′ = A(t)X.
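Numerically, the characteristic multipliers are obtained by integrating X′ = A(t)X over one period with X(0) = I and taking the eigenvalues of C = X(ω). The periodic matrix below is an assumed Mathieu-type example of ours.

```python
import numpy as np
from scipy.integrate import solve_ivp

omega = 2.0 * np.pi
A = lambda t: np.array([[0.0, 1.0],
                        [-(1.0 + 0.3 * np.cos(t)), 0.0]])  # assumed, ω-periodic
rhs = lambda t, x: (A(t) @ x.reshape(2, 2)).ravel()
sol = solve_ivp(rhs, (0.0, omega), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
C = sol.y[:, -1].reshape(2, 2)                 # the monodromy matrix X(ω)
multipliers = np.linalg.eigvals(C).astype(complex)
exponents = np.log(multipliers) / omega        # determined mod 2πi/ω
print(multipliers, exponents)
```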
Remark 6.3. Corresponding to each eigenvalue σ of C, where X(t + ω) =
X(t)C, there exists a nontrivial vector solution x(t) of the vector system x′ =
A(t)x such that x(t + ω) = σx(t) for all t ∈ ℝ.
Proof. Let σ be an eigenvalue of C and let x0 be an associated eigenvector,
then Cx0 = σ x0. Let x(t) be a solution of x′ = A(t)x given by x(t) ≡ X(t)x0.
Then,
Remark 6.4. This last remark tells us when we can obtain periodic
solutions of a periodic system; i.e., if 1 is an eigenvalue of C, then the
system x′ = A(t)x has a nontrivial ω-periodic solution.
Suppose σ is an nth root of unity and σ is an eigenvalue of C. Then there
exists a nontrivial solution x(t) with x(t + ω) = σx(t). We iterate this; that is,
replace “t” by “t + ω”. Hence, x(t + 2ω) = σ x(t + ω) = σ 2x(t),…, x(t + nω)
= σnx(t) = x(t) (since σn = 1). Thus, in this case, x′ = A(t)x has an nω-
periodic solution.
Remark 6.5. For our next observation, if σ is an eigenvalue of C and |σ| ≤
1, then x′ = A(t)x has a nontrivial solution x(t) which is bounded on [0, ∞).
(Note: C is nonsingular, thus σ ≠ 0 and σ−1 exists.) To see the bounded part,
let t ∈ ℝ be arbitrary, but fixed, and let ω ∈ ℝ. See the Figure 6.1.
Fig. 6.1 The points Kω, K = 0,1,2,….
we find .
Therefore, x(t) is ω-periodic iff .
If x0≠ 0, then
and
every solution of (6.1) can be written in the form of , where x(0) = x0.
Now (6.1) has an ω-periodic solution iff x(0) = x(ω) by Theorem 6.2.
For x(0) = x(ω), we must have
where
where
is given by .
Now, if 1 is not an eigenvalue of x(ω), then 1 − x(ω) ≠ 0.
So in this case, x′ = a(t)x + f(t) has an ω-periodic solution for each ω-
periodic f(t).
We would next like to show the existence of solutions which are periodic
using a fixed point theorem. One of the interesting asides of this fixed point
theorem is that we can also apply it to an alternate proof of the Picard
Existence Theorem.
Theorem 6.3 (Contraction Mapping Principle). Let (M, d) be a complete
metric space and assume that there exist a function T: M → M and a
constant K, 0 ≤ K < 1, such that d(T(x),T(y)) ≤ K d(x, y), for each x, y ∈ M.
Then T has a unique fixed point; i.e., there exists a unique x0 ∈ M such that
T(x0) = x0.
So x0 is “a” fixed point of T. For the uniqueness, let y0 also be a fixed point
of T. Then
with metric
Hence,
Now
and so,
So,
with metric d(ϕ, ψ) = maxt∈ℝ |ϕ(t) − ψ(t)| = max0≤t≤ω |ϕ(t) − ψ(t)|. Then
(M, d) is a complete metric space.
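Before applying the principle to periodic solutions, here is a minimal numerical illustration of it, using the map T(x) = cos x, which is a contraction on [0, 1] since |T′(x)| = |sin x| ≤ sin 1 < 1 there.

```python
import math

# Iterating a contraction converges to its unique fixed point.
x = 0.5
for _ in range(60):
    x = math.cos(x)
print(x, math.cos(x) - x)  # x ≈ 0.739085..., residual ≈ 0
```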
Theorem 6.5 Let a(t) ∈ M and let f(t, x) be continuous on ℝ× ℝ with f(t, x)
ω-periodic in t, for each fixed x (Note that for ϕ ∈ M, f(t, ϕ(t)) is ω-
periodic.) Assume , for all (t, x),(t, y) ∈ ℝ × ℝ, and . Then the D.E. x′ =
a(t)x + f(t, x) has a unique ω-periodic solution.
Proof. If ∫0ω a(s) ds ≠ 0, then x′ = a(t)x does not have a nontrivial ω-periodic
solution. Hence, we can form a Green’s function G(t, s).
Now define T: M → M via
Consider
Stability Theory
Definition 7.1. The solution x(t; t0, x0) is said to be stable on [t0, ∞) in case
for each ε > 0, there exists δ > 0 such that ‖x1 – x0‖ < δ implies that all
solutions x(t; t0, x1) exist on [t0, ∞) and ‖x(t; t0, x1) – x(t; t0, x0)‖ < ε on [t0,
∞).
(2) x′ = −x on [0, ∞). In this case x(t; 0, 0) ≡ 0. Let x(t; 0, x1) be any other
solution satisfying x(0) = x1. Then x(t; 0, x1) = x1e−t. Consider |x(t; 0, x1)
– x(t; 0, 0)| = |x1|e−t ≤ |x1|. Thus, taking δ = ε, the solution x(t; 0, 0)
is stable on [0, ∞).
(3) x′ = x on [0, ∞). Again, x(t; 0, 0) ≡ 0. For any x1 ≠ 0, |x(t; 0, x1) –
x(t; 0, 0)| = |x1|et → ∞ as t → ∞, and so the zero solution is unstable on
[0, ∞) (see the sketch below).
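A short numerical sketch contrasting examples (2) and (3); the closed-form solutions are used directly.

```python
import numpy as np

# For x' = -x, |x(t; 0, x1)| = |x1| e^{-t} never exceeds |x1| (stability);
# for x' = x, |x(t; 0, x1)| = |x1| e^{t} grows without bound (instability).
t = np.linspace(0.0, 10.0, 201)
x1 = 1e-3
print(np.max(np.abs(x1 * np.exp(-t))))  # ≤ |x1|
print(np.max(np.abs(x1 * np.exp(t))))   # ≈ 22, already far above |x1|
```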
Definition 7.2. The solution x(t; t0, x0) is said to be asymptotically stable on
[t0, ∞) in case
(1) it is stable on [t0, ∞), and
(2) there exists η > 0 such that ‖x1 – x0‖ < η implies limt→∞ ‖x(t; t0, x1) −
x(t; t0, x0)‖ = 0.
Note: The above definitions of stability and asymptotic stability are due
to Lyapunov.
Definition 7.3. The solution x(t; t0, x0) is said to be uniformly stable on [t0,
∞) in case for each ε > 0, there exists δε > 0 such that if t1 ≥ t0 and ‖x1 –
x(t1; t0, x0)‖ < δε, then ‖x(t; t1, x1) – x(t; t0, x0)‖ < ε on [t1, ∞).
Fig. 7.2 Uniformly stable solution.
Obviously,
Exercise 37. Show that the following definition of stability is equivalent to
the given one if solutions of IVP’s for x′ = f(t, x) are unique: x(t; t0, x0) is
stable on [t0, ∞) in case given t1 ≥ t0 and given ε > 0, there exists δ (t1, ε) >
0 such that ‖x1 – x(t1; t0, x0)‖ < δ implies: x(t; t1, x1) exists on [t1, ∞) and
‖x(t; t1, x1) – x(t; t0, x0)‖ < ε on [t1, ∞).
Note: This clearly implies Definition 7.1 by taking t1 = t0. Hint for the converse: Let {xn} be a sequence of points with xn → x(t1; t0, x0) and consider the sequence of solutions x(t; t1, xn). By the Kamke Convergence Theorem, these solutions approach x(t; t0, x0) on compact subintervals of [t0, ∞). Then apply Definition 7.1 and the stated property in the exercise will be satisfied.
Note then that our Definition 7.1 and the statement in Exercise 37 are not equivalent if solutions of IVP's are not unique both to the right and to the left. This says that our Definition 7.1 of stability gives uniqueness of solutions to the right but not necessarily to the left.
Example 7.3 (Uniform Asymptotic Stability). Consider the solution x(t) ≡ 0 of x′ = −x on [0, +∞).
(1) Any solution is of the form x(t; t1, x1) = x1 e^{−(t−t1)}. Hence |x(t; t1, x1) − 0| = |x1| e^{−(t−t1)} ≤ |x1| on [t1, ∞).
Thus, taking δ = ε, x(t) ≡ 0 is uniformly stable.
(2) Let δ0 be a fixed positive number; then |x1 − x(t1; 0, 0)| = |x1| < δ0 implies lim_{t→∞} |x(t; t1, x1) − x(t; 0, 0)| = lim_{t→∞} |x1| e^{−(t−t1)} = 0. Thus, any δ0 > 0 works for this part.
(3) For this part, we proceed to find T(ε). Now
|x(t; t1, x1) − 0| = |x1| e^{−(t−t1)} < δ0 e^{−(t−t1)}, where |x1| < δ0.
Take T(ε) = ln(δ0/ε) (for 0 < ε < δ0), so that if −(t − t1) ≤ −T(ε), then e^{−(t−t1)} ≤ e^{−T(ε)} = ε/δ0. Therefore, for t ≥ t1 + T(ε),
|x(t; t1, x1)| < δ0 (ε/δ0) = ε,
and x(t) ≡ 0 is uniformly asymptotically stable. One can then show that not only are the x-values close to 0, but so also are the x′-values; i.e., since x′(t; t1, x1) = −x(t; t1, x1), the derivatives are close to 0 as well.
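For concreteness (our numbers): with δ0 = 1 and ε = 10^{−2},
T(ε) = ln(δ0/ε) = ln 100 ≈ 4.61,
so every solution with |x1| < 1 satisfies |x(t; t1, x1)| < 10^{−2} for all t ≥ t1 + 4.61, uniformly in t1.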
Example 7.4. Consider the solution x(t) ≡ 0 of the scalar equation x′ = a(t)x.
Now x(t; 0, x1) = x1 e^{∫_0^t a(s) ds}.
(1) x(t) ≡ 0 is stable on [0, ∞) iff Re ∫_0^t a(s) ds ≤ M (a constant) on [0, ∞).
To see this, take δ = ε e^{−M}; then |x1| < δ implies |x(t; 0, x1)| = |x1| e^{Re ∫_0^t a(s) ds} ≤ |x1| e^{M} < ε on [0, ∞).
Now if x(t) is a solution in the δε-tube at t1n, it does not stay in the ε-tube. In fact, let x(t) be a solution with |x(t1n)| = δε; then x(t) = x(t1n) e^{∫_{t1n}^{t} a(s) ds} for t ≥ t1n. But from above, |x(t2n)| = δε e^{Re ∫_{t1n}^{t2n} a(s) ds}, which can be made arbitrarily large for n large enough. In particular, |x(t2n)| > ε for large n, and hence the zero solution is not uniformly stable. See Figure 7.6.
Fig. 7.6
For our next consideration, we discuss stability concepts for a vector linear system
(7.1) x′ = A(t)x + f(t),
where A(t) and f(t) are continuous on [t0, ∞). Let x(t; t0, x0) be a solution of (7.1). This solution is stable on [t0, ∞) in case for each ε > 0, there exists δε > 0 such that ‖x1 − x0‖ < δε implies ‖x(t; t0, x1) − x(t; t0, x0)‖ < ε on [t0, ∞).
Let
(7.2) x′ = A(t)x
be the homogeneous system and let “x(t)” and “y(t)” denote solutions of (7.1) and (7.2), respectively.
Note that if x(t) and x̂(t) are solutions of (7.1), then y(t) = x(t) − x̂(t) satisfies
y′ = [A(t)x + f(t)] − [A(t)x̂ + f(t)] = A(t)(x − x̂) = A(t)y,
so y(t) is a solution of (7.2).
Conversely, if y(t) is a solution of (7.2) and x̂(t) is a solution of (7.1), then x(t) = x̂(t) + y(t) is a solution of (7.1).
Remark 7.1. The solution x(t; t0, x0) of (7.1) is stable on [t0, ∞) iff the zero
solution of (7.2) is stable on [t0, ∞).
Proof. First, let us note that the zero solution of (7.2) is stable on [t0, +∞) in case, given ε > 0, there exists δε > 0 such that ‖y1‖ < δε implies ‖y(t; t0, y1)‖ < ε on [t0, +∞).
Now let us suppose first that x(t; t0, x0) is stable, let ε > 0 be given, and let δε be the corresponding delta.
Now let y1 be such that ‖y1‖ < δε. Then ‖y(t; t0, y1)‖ = ‖x(t; t0, y1 + x0) − x(t; t0, x0)‖ < ε by the stability of x(t; t0, x0). Therefore, the zero solution of (7.2) is stable.
The converse is similar. Assume the zero solution of (7.2) is stable with ε and δε as usual. If ‖x1 − x0‖ < δε, then ‖x(t; t0, x1) − x(t; t0, x0)‖ = ‖y(t; t0, x1 − x0)‖ < ε by the stability of the zero solution of (7.2). So, the solution x(t; t0, x0) is stable on [t0, ∞).
Remark 7.2. The same argument shows that for each of the other four types
of stability, a fixed solution of (7.1) has that type of stability iff the zero
solution of (7.2) has that same type of stability.
Therefore, for a fixed equation (7.1), all solutions have the same type of stability, since each solution has a given type of stability iff the zero solution of (7.2) does, and the type of stability is determined by the homogeneous system (7.2).
Hence, we say that the system (7.2) is stable, is unstable, is asymptotically
stable, etc. ☐
Theorem 7.1. Let A(t) be a continuous n × n matrix function on [t0, ∞), and let X(t) be the solution of
X′ = A(t)X, X(t0) = I.
Then the system (7.2), x′ = A(t)x, is
(1) Stable on [t0, ∞) iff there exists K > 0 such that ‖X(t)‖ ≤ K on [t0, ∞).
(2) Uniformly stable on [t0, ∞) iff there exists K > 0 such that ‖X(t)X^{−1}(s)‖ ≤ K, for all t0 ≤ s ≤ t < ∞.
(3) Strongly stable on [t0, ∞) iff there exists K > 0 such that ‖X(t)‖ ≤ K and ‖X^{−1}(t)‖ ≤ K on [t0, ∞).
(4) Asymptotically stable on [t0, ∞) iff lim_{t→∞} ‖X(t)‖ = 0.
(5) Uniformly asymptotically stable on [t0, ∞) iff there exist K > 0 and α > 0 such that
‖X(t)X^{−1}(s)‖ ≤ K e^{−α(t−s)}, for all t0 ≤ s ≤ t < ∞.
Proof. (1) Assume that ‖X(t)‖ ≤ K on [t0, ∞). The solution x(t; t0, x0) = X(t)x0, so
‖x(t; t0, x0)‖ = ‖X(t)x0‖ ≤ K‖x0‖ < ε, provided ‖x0‖ < δε = ε/K.
The zero solution of (7.2) is stable, which is what we mean by saying the system (7.2) is stable.
Conversely, assume x′ = A(t)x is stable on [t0, ∞). Then, for ε = 1, there exists δ0 > 0 such that ‖x0‖ < δ0 implies ‖x(t; t0, x0)‖ < 1 on [t0, ∞). So ‖X(t)x0‖ < 1 on [t0, ∞) for ‖x0‖ < δ0. Now let y0 be a vector such that ‖y0‖ < 1, and let x0 = δ0 y0; then ‖x0‖ < δ0. Hence ‖X(t)y0‖ = δ0^{−1} ‖X(t)x0‖ < δ0^{−1} for all t ∈ [t0, ∞). By the definition of the matrix norm, ‖X(t)‖ ≤ δ0^{−1}. Taking K = δ0^{−1}, the converse is satisfied.
(2) Assume now that ‖X(t)X^{−1}(s)‖ ≤ K for t0 ≤ s ≤ t. Let s ∈ [t0, ∞) and ε > 0 be given. Then for t ≥ s,
‖x(t; s, x0)‖ = ‖X(t)X^{−1}(s)x0‖ ≤ K‖x0‖ < ε,
provided ‖x0‖ < δε = ε/K (which is independent of s). Thus (7.2) is uniformly stable.
Conversely, assume (7.2) is uniformly stable on [t0, ∞). Then, given ε > 0, there exists δε > 0 such that t1 ≥ t0 and ‖x0‖ < δε imply ‖x(t; t1, x0)‖ < ε on [t1, ∞). Hence, ‖X(t)X^{−1}(t1)x0‖ < ε on [t1, ∞). By taking ε = 1, the same argument as in part (1) can be used to show that ‖X(t)X^{−1}(t1)‖ ≤ δ1^{−1} for all t0 ≤ t1 ≤ t, where δ1 is the delta corresponding to ε = 1.
Remark 7.3. The system (7.2) is stable on [t0, ∞) iff each solution of x′ =
A(t)x is bounded on [t0, ∞).
(3) Assume ‖X(t)‖ ≤ K and ‖X^{−1}(t)‖ ≤ K on [t0, ∞). Let x(t; t1, x0) be a solution with x(t1) = x0. Then for all t, t1 ∈ [t0, ∞),
‖x(t; t1, x0)‖ = ‖X(t)X^{−1}(t1)x0‖ ≤ K²‖x0‖ < ε, provided ‖x0‖ < ε/K².
Thus (7.2) is strongly stable on [t0, ∞).
For the converse, assume (7.2) is strongly stable on [t0, ∞). Then given any t1 ≥ t0, and any ε > 0 (in particular, take ε = 1), there exists δ0 > 0 such that ‖x0‖ ≤ δ0 implies ‖x(t; t1, x0)‖ < 1 on [t0, ∞). There are two cases:
Case 1: Take t1 = t0. Then ‖x(t; t0, x0)‖ = ‖X(t)x0‖ < 1 on [t0, ∞) for ‖x0‖ < δ0. By part (1) of this theorem, this implies ‖X(t)‖ ≤ δ0^{−1} on [t0, ∞).
Case 2: Take t = t0. Then, since X(t0) = I,
‖x(t0; t1, x0)‖ = ‖X(t0)X^{−1}(t1)x0‖ = ‖X^{−1}(t1)x0‖ < 1
for ‖x0‖ < δ0. Then, as before, ‖X^{−1}(t1)‖ ≤ δ0^{−1}. But t1 is arbitrary; hence ‖X^{−1}(t)‖ ≤ δ0^{−1} on [t0, ∞). Taking K = δ0^{−1} completes this part.
(4) Assume that ‖X(t)‖ → 0 as t → +∞. Since ‖X(t)‖ is continuous on [t0, ∞), ‖X(t)‖ is bounded on [t0, ∞). By part (1) of this theorem, the equation (7.2) is stable on [t0, ∞). Secondly, if x0 ∈ ℝn, then ‖x(t; t0, x0)‖ = ‖X(t)x0‖ ≤ ‖X(t)‖ ‖x0‖. Hence lim_{t→∞} ‖x(t; t0, x0)‖ = 0. Therefore (7.2) is asymptotically stable on [t0, ∞).
For the converse, if (7.2) is asymptotically stable on [t0, ∞), then (7.2) is stable, hence ‖X(t)‖ is bounded (see the remark above). Also, by the asymptotic stability of (7.2), there exists η > 0 such that ‖x0‖ ≤ η implies lim_{t→∞} ‖x(t; t0, x0)‖ = 0. But the restriction ‖x0‖ ≤ η is not needed; i.e., given x0 ∈ ℝn, x0 ≠ 0, set c = η/‖x0‖, so that ‖c x0‖ = η; then lim_{t→∞} ‖x(t; t0, c x0)‖ = 0, and by linearity x(t; t0, x0) = c^{−1} x(t; t0, c x0). Thus lim_{t→∞} ‖x(t; t0, x0)‖ = 0, for all x0 ∈ ℝn.
So let ε > 0 be given. Then for each 1 ≤ j ≤ n, there exists tj ≥ t0 such that ‖x(t; t0, ej)‖ < ε on [tj, +∞), since lim_{t→∞} ‖x(t; t0, ej)‖ = 0.
Let T = max_{1≤j≤n} {tj}. Then for any x0 = (c1, …, cn)ᵀ ∈ ℝn with ‖x0‖ ≤ 1 (so that |cj| ≤ 1 for each j), and any t ≥ T, we have
‖x(t; t0, x0)‖ = ‖X(t)x0‖ = ‖∑_{j=1}^{n} cj X(t)ej‖ ≤ ∑_{j=1}^{n} |cj| ‖x(t; t0, ej)‖ < nε,
since X(t)ej = x(t; t0, ej). So ‖X(t)x0‖ < nε for all t ≥ T and all ‖x0‖ ≤ 1; hence ‖X(t)‖ ≤ nε for all t ≥ T. Therefore, ‖X(t)‖ → 0 as t → ∞.
(5) Assume first that ‖X(t)X^{−1}(s)‖ ≤ K e^{−α(t−s)} for t0 ≤ s ≤ t. Then by part (2) of this theorem, (7.2) is uniformly stable on [t0, ∞). Next, given 0 < ε < K (we use ε < K so that ln(K/ε) > 0), let T(ε) = α^{−1} ln(K/ε). Then, for any t1 ≥ t0 and any x1 ∈ ℝn with ‖x1‖ ≤ 1, we have that for t ≥ t1 + T(ε),
‖x(t; t1, x1)‖ = ‖X(t)X^{−1}(t1)x1‖ ≤ K e^{−α(t−t1)} ‖x1‖ ≤ K e^{−αT(ε)} = ε.
Note: t − t1 ≥ T(ε) implies e^{−α(t−t1)} ≤ e^{−αT(ε)} = ε/K. Hence ‖x(t; t1, x1)‖ ≤ ε. Therefore, (7.2) is uniformly asymptotically stable on [t0, ∞).
Conversely, assume that x′ = A(t)x is uniformly asymptotically stable on [t0, ∞). Then, there exists δ0 > 0 such that, for each 0 < ε < δ0, there exists T(ε) such that ‖x0‖ < δ0 and t ≥ t1 + T(ε) imply ‖x(t; t1, x0)‖ < ε. So ‖X(t)X^{−1}(t1)x0‖ < ε, for t ≥ t1 + T(ε) and ‖x0‖ < δ0. Thus ‖X(t)X^{−1}(t1)(x0/δ0)‖ < ε/δ0, and x0/δ0 is an arbitrary vector with norm < 1. Consequently, fixing 0 < ε < δ0,
(7.3) ‖X(t)X^{−1}(t1)‖ ≤ ε/δ0 = θ < 1, for t ≥ t1 + T(ε).
Now x′ = A(t)x is uniformly stable on [t0, ∞), and so by part (2) of this theorem, ‖X(t)X^{−1}(s)‖ ≤ K for t0 ≤ s ≤ t. So now let t ≥ t1 ≥ t0 and let the integer m ≥ 0 be such that t1 + mT(ε) ≤ t < t1 + (m + 1)T(ε). Then, writing sj = t1 + jT(ε),
‖X(t)X^{−1}(t1)‖ ≤ ‖X(t)X^{−1}(sm)‖ ‖X(sm)X^{−1}(s_{m−1})‖ ⋯ ‖X(s1)X^{−1}(t1)‖ ≤ K θ^m.
So, from (7.3),
‖X(t)X^{−1}(t1)‖ ≤ K θ^m = K e^{m ln θ} = K e^{−α m T(ε)},
where α = −T(ε)^{−1} ln θ (note θ < 1 yields ln θ < 0).
Hence, ‖X(t)X^{−1}(t1)‖ ≤ (K/θ) e^{−α(t−t1)}, since t − t1 < (m + 1)T(ε). ☐
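As a numerical sanity check of the criteria in Theorem 7.1 (this script and the helper fundamental_matrix are ours, under the simplifying assumption of a constant coefficient matrix with t0 = 0, so that X(t) = e^{At}):

import numpy as np
from scipy.linalg import expm

def fundamental_matrix(A, t):
    # X(t) = e^{At} solves X' = AX, X(0) = I, for constant A
    return expm(A * t)

A_stable = np.array([[0.0, 1.0], [-1.0, 0.0]])   # x'' + x = 0; ||X(t)|| stays 1
A_asymp  = np.array([[-1.0, 0.0], [0.0, -2.0]])  # eigenvalues -1, -2; ||X(t)|| -> 0

for name, A in [("rotation", A_stable), ("diag(-1,-2)", A_asymp)]:
    norms = [np.linalg.norm(fundamental_matrix(A, t), 2)
             for t in np.linspace(0.0, 50.0, 501)]
    print(name, " sup ||X(t)|| =", round(max(norms), 3),
          "  ||X(50)|| =", f"{norms[-1]:.2e}")

The first system is (uniformly) stable but not asymptotically stable, the second is (uniformly) asymptotically stable, matching parts (1), (2), (4), and (5).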
For the other part, assume the system is stable and that
Re ∫_{t0}^{t} Tr A(s) ds ≥ −M on [t0, ∞), for some constant M.
Then simply retrace the steps above to conclude that [det X(t)]^{−1} is bounded on [t0, ∞). Also, by the assumed stability of the system, ‖X(t)‖ is bounded on [t0, ∞). So, the entries in X(t) are bounded on [t0, ∞); hence the entries in X^{−1}(t) (each a cofactor of X(t) divided by det X(t)) are bounded on [t0, ∞). Hence, ‖X^{−1}(t)‖ is bounded, and thus the system is strongly stable. ☐
Example 7.6.
(1) The statement of Lemma 7.1 is not “iff”.
Consider x″ − x = 0. The corresponding system is
x′ = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} x.
Solutions of x″ − x = 0 are sinh t and cosh t, which are unbounded on [0, ∞). So x″ − x = 0 is unstable. But Tr A(t) ≡ 0 yields det X(t) = e^{∫_0^t Tr A(s) ds} ≡ 1. Hence the system is unstable, but det X(t) and [det X(t)]^{−1} are bounded.
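A quick numerical confirmation (our script): for the system form of x″ − x = 0, ‖X(t)‖ grows like e^{t} while det X(t) ≡ 1:

import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [1.0, 0.0]])   # system form of x'' - x = 0
for t in [0.0, 5.0, 10.0]:
    X = expm(A * t)
    print(f"t = {t:4.1f}   ||X(t)|| = {np.linalg.norm(X, 2):.3e}   det X(t) = {np.linalg.det(X):.6f}")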
(2) For equations of the form x″ + a(t)x = 0, it follows from Lemma 7.2 that if the equation is stable on [t0, ∞), then it is strongly stable. Here
A(t) = \begin{pmatrix} 0 & 1 \\ −a(t) & 0 \end{pmatrix},
hence Tr A(t) ≡ 0 and Re ∫_{t0}^{t} Tr A(s) ds = 0 ≥ −M. Thus the hypothesis of the lemma is satisfied, so we have strong stability of the system, provided the system was stable to begin with.
Chapter 8
Consider the nonlinear system
(8.1) x′ = f(t, x),
and let x(t) be a fixed solution of (8.1). If x̃(t) is any other solution, set z(t) = x̃(t) − x(t). Then
z′(t) = f(t, x̃(t)) − f(t, x(t)) = f(t, x(t) + z(t)) − f(t, x(t)).
Thus z(t) is a solution of the perturbed linear system z′ = A(t)z + h(t, z), where A(t) = fx(t, x(t)) and
h(t, z) = f(t, x(t) + z) − f(t, x(t)) − fx(t, x(t))z.
Hence, we look at the stability of the zero solution of this reduced system
(8.3) z′ = A(t)z + h(t, z)
in evaluating the stability of a solution x(t) of (8.1).
In order to make such considerations, we need to assume x(t) is a
solution of (8.1) on [t0, ∞), and f(t,x) is continuous and has continuous first
partials wrt the components of x on the tube U = {(t, x) | t ≥ t0, ‖x – x(t)‖ <
r}, where r > 0.
With these assumptions, we have
‖h(t, z)‖/‖z‖ → 0 as ‖z‖ → 0
on each compact subinterval of [t0, ∞), since fx is uniformly continuous there; i.e., lim_{‖z‖→0} ‖h(t, z)‖/‖z‖ = 0 uniformly wrt t on each compact subinterval of [t0, ∞).
Now, in the autonomous case, if c ∈ ℂn is such that f(c) = 0, then x(t) ≡ c is a solution of x′ = f(x). In this case, the perturbed system (8.3) becomes z′ = fx(c)z + h(z), where fx(c) is a constant matrix and h(z) = f(c + z) − fx(c)z. Hence, lim_{‖z‖→0} ‖h(z)‖/‖z‖ = 0 uniformly for all t (since no t appears).
For yet another observation, if f(t, x) has period ω in t, i.e., f(t + ω, x) = f(t, x), if x(t) is a periodic solution of x′ = f(t, x) with period ω on the real line, and if f(t, x) is continuous and has continuous first partials wrt the components of x on the tube U, then equation (8.3) becomes z′ = A(t)z + h(t, z), where A(t) = fx(t, x(t)) is periodic with period ω, and h(t + ω, z) = h(t, z). From the formula for h(t, z) and the periodicity of h(t, z), we have lim_{‖z‖→0} ‖h(t, z)‖/‖z‖ = 0 uniformly on each compact subinterval of (−∞, ∞); because of the periodicity, we need consider the limit only on [0, ω], which is compact. Thus lim_{‖z‖→0} ‖h(t, z)‖/‖z‖ = 0 uniformly on (−∞, ∞).
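As a numerical illustration of the remainder estimate in the autonomous case (the pendulum system here is our example, not the text's): with f(x1, x2) = (x2, −sin x1) and equilibrium c = 0, h(z) = f(c + z) − fx(c)z, and the ratio ‖h(z)‖/‖z‖ tends to 0 as ‖z‖ → 0:

import numpy as np

def f(x):
    # pendulum system: x1' = x2, x2' = -sin(x1); equilibrium c = (0, 0)
    return np.array([x[1], -np.sin(x[0])])

fx_c = np.array([[0.0, 1.0], [-1.0, 0.0]])   # Jacobian f_x(c) at c = 0

for r in [1.0, 0.1, 0.01, 0.001]:
    z = np.array([r, r]) / np.sqrt(2.0)      # ||z|| = r
    h = f(z) - fx_c @ z                      # h(z) = f(c+z) - f_x(c) z, since f(c) = 0
    print(f"||z|| = {r:7.3g}   ||h(z)||/||z|| = {np.linalg.norm(h)/r:.3e}")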
Theorem 8.1. Assume that in the equation
(8.4) x′ = A(t)x,
A(t) is a continuous n × n matrix function on [t0, +∞), and assume that (8.4) is uniformly stable on [t0, +∞). In the perturbed system
(8.5) x′ = A(t)x + f(t, x),
assume that f(t, x) is continuous on [t0, ∞) × Br(0), that f(t, 0) ≡ 0 on [t0, ∞), and that ‖f(t, x)‖ ≤ γ(t)‖x‖ on [t0, ∞) × Br(0), where γ(t) is integrable on [t0, t1], for each t1 > t0, and ∫_{t0}^{∞} γ(s) ds < ∞. Then there exists L > 1 such that, for each t1 ≥ t0 and x0 with ‖x0‖ < L^{−1}r, the solution x(t; t1, x0) of (8.5) exists on [t1, ∞) and satisfies ‖x(t; t1, x0)‖ ≤ L‖x0‖ on [t1, ∞).
Proof. Assume x(t; t1, x0) is a solution of (8.5) with t1 ≥ t0 and ‖x0‖ < L^{−1}r. We will find an expression for L which satisfies the conditions of the theorem.
For notation, let x(t) ≡ x(t; t1, x0), and let [t1, ω) be the right maximal interval of existence of x(t). Thus on [t1, ω), x(t) is a solution of x′ = A(t)x + f(t, x(t)). Now f(t, x(t)) is just a function of t, hence by the Variation of Constants Formula,
x(t) = X(t)X^{−1}(t1)x0 + ∫_{t1}^{t} X(t)X^{−1}(s) f(s, x(s)) ds,
where X(t) is the solution of X′ = A(t)X, X(t0) = I. Note: X(t)X^{−1}(t1) is the solution of X′ = A(t)X, X(t1) = I.
Now by the uniform stability of (8.4), there exists K > 0 such that ‖X(t)‖ ≤ K and ‖X(t)X^{−1}(s)‖ ≤ K, t0 ≤ s ≤ t < ∞. Hence, for each t ∈ [t1, ω),
‖x(t)‖ ≤ K‖x0‖ + K ∫_{t1}^{t} γ(s) ‖x(s)‖ ds.
Applying the Gronwall Inequality, we have
‖x(t)‖ ≤ K‖x0‖ e^{K ∫_{t1}^{t} γ(s) ds} ≤ K e^{K ∫_{t0}^{∞} γ(s) ds} ‖x0‖ = L*‖x0‖ on [t1, ω),
where L* = K e^{K ∫_{t0}^{∞} γ(s) ds}. Now this puts a “lid” on x(t). Since [t1, ω) is right maximal, recall that x(t) → ∂D as t → ω, where D = [t1, ω) × Br(0), but this condition puts a “lid” on that. See Figure 8.1.
Now let h > 1 be fixed and set L ≡ max{h, L*}. Assume ‖x0‖ < L^{−1}r; in fact, let r0 = L‖x0‖; then ‖x0‖ = L^{−1}r0 < L^{−1}r. Hence, r0 < r. Then we have ‖x(t)‖ ≤ L*‖x0‖ ≤ L‖x0‖ = r0 on [t1, ω).
We claim that ω = +∞: Choose r̂ such that r0 < r̂ < r and assume ω < ∞. Now the set Q = [t1, ω] × {x | ‖x‖ ≤ r̂} (compact) ⊆ [t0, ∞) × Br(0). Now f(t, x) is continuous on [t0, ∞) × Br(0). By Chapter 2, the solution approaches the boundary as t → ω, hence must leave the compact set Q. So, there exists τ with t1 < τ < ω such that (t, x(t)) ∉ Q for all τ < t < ω. Now t, τ ∈ [t1, ω), hence the graph leaves at the “top” of the compact set, i.e., ‖x(t)‖ > r̂ > r0, for τ < t < ω. But ‖x(t)‖ ≤ L‖x0‖ = r0 < r̂, for all t1 ≤ t < ω, which is a contradiction. Thus, ω = +∞.
Fig. 8.1 The solution must leave each compact set (as t → ω), i.e., x(t) leaves the top of the small tube, but this condition puts a lid on that.
Then it follows that ‖x(t)‖ ≤ L‖x0‖ on [t1, ∞), for all t1 ≥ t0. ☐
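For a concrete feel (our numbers, not the text's): if K = 1, t0 = 0, and γ(s) = 1/(1 + s²), then ∫_0^∞ γ(s) ds = π/2, so
L* = K e^{K ∫_0^∞ γ(s) ds} = e^{π/2} ≈ 4.81;
with h = 2, L = max{h, L*} ≈ 4.81, and every solution with ‖x0‖ < r/L satisfies ‖x(t)‖ ≤ L‖x0‖ < r for all t ≥ t1.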
Corollary 8.1. If the hypotheses of Theorem 8.1 are satisfied, then the
solution x(t) ≡ 0 of (8.5) is uniformly stable on [t0, ∞).
Corollary 8.2. Assume again the hypotheses of Theorem 8.1 and in addition
that the system x′ = A(t)x is asymptotically stable on [t0, ∞). Then the zero
solution of (8.5) is both uniformly stable and asymptotically stable on [t0,
∞).
Proof. By Corollary 8.1, the zero solution is uniformly stable.
Now let x0 ∈ ℂn satisfy ‖x0‖ < L^{−1}r, where L is as in Theorem 8.1. Then x(t) = x(t; t0, x0) exists on [t0, ∞) by Theorem 8.1 and
x(t) = X(t)x0 + ∫_{t0}^{t} X(t)X^{−1}(s) f(s, x(s)) ds,
where X(t) is the solution of X′ = A(t)X, X(t0) = I. For t ≥ t1 > t0, we can write
x(t) = X(t)x0 + ∫_{t0}^{t1} X(t)X^{−1}(s) f(s, x(s)) ds + ∫_{t1}^{t} X(t)X^{−1}(s) f(s, x(s)) ds.
Upon applying norm inequalities and using ‖X(t)X^{−1}(s)‖ ≤ K and ‖x(t)‖ ≤ L‖x0‖ on t0 ≤ s ≤ t < ∞, we have
‖x(t)‖ ≤ ‖X(t)‖ ‖x0‖ + ‖X(t)‖ ∫_{t0}^{t1} ‖X^{−1}(s)‖ γ(s) L‖x0‖ ds + K L‖x0‖ ∫_{t1}^{t} γ(s) ds.
Given ε > 0, since ∫_{t0}^{∞} γ(s) ds < ∞, we can choose t1 > t0 such that
K L‖x0‖ ∫_{t1}^{∞} γ(s) ds < ε/2.
From the asymptotic stability of (8.4), ‖X(t)‖ → 0 as t → ∞. Hence, we can choose t2 > t1 such that, for t ≥ t2, we have
‖X(t)‖ (‖x0‖ + ∫_{t0}^{t1} ‖X^{−1}(s)‖ γ(s) L‖x0‖ ds) < ε/2.
Therefore, from above, for t ≥ t2, ‖x(t)‖ < ε. Hence, lim_{t→∞} ‖x(t)‖ = 0.
Therefore, the zero solution of (8.5) is asymptotically stable. ☐
Corollary 8.3. Assume that A(t) and B(t) are continuous n × n matrix functions on [t0, ∞), assume that the system x′ = A(t)x is uniformly stable (and asymptotically stable), and assume that ∫_{t0}^{∞} ‖B(s)‖ ds < ∞. Then the system
(8.6) x′ = (A(t) + B(t))x
is uniformly stable (and asymptotically stable) on [t0, ∞).
Proof. Let f(t, x) = B(t)x and γ(t) = ‖B(t)‖. The conclusion follows from above. ☐
Example 8.1. (1) The D.E.
(8.7) x″ + x = 0
is uniformly stable on [0, +∞). This is true because all solutions are linear combinations of sin t and cos t.
Consider the perturbed system
(8.8) x″ + x = b(t)x,
where b(t) is continuous on [0, ∞). Let
x1 = x, x2 = x′;
then system (8.7) is
x′ = \begin{pmatrix} 0 & 1 \\ −1 & 0 \end{pmatrix} x,
and the perturbed system (8.8) is given by
x′ = \begin{pmatrix} 0 & 1 \\ −1 & 0 \end{pmatrix} x + \begin{pmatrix} 0 & 0 \\ b(t) & 0 \end{pmatrix} x.
Since (8.7) is uniformly stable, (8.8) is also uniformly stable on [0, +∞) if ∫_0^∞ |b(t)| dt < ∞.
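A hedged numerical check of this example (our script; the choice b(t) = 1/(1 + t²), for which ∫_0^∞ |b(t)| dt = π/2 < ∞, is ours):

import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, x):
    b = 1.0 / (1.0 + t * t)          # integrable perturbation coefficient
    return [x[1], -x[0] + b * x[0]]  # system form of x'' + x = b(t) x

sol = solve_ivp(rhs, (0.0, 500.0), [1.0, 0.0], rtol=1e-10, atol=1e-12)
print("max |x(t)| on [0, 500]:", np.abs(sol.y[0]).max())   # stays bounded

The printed maximum stays of order 1, consistent with uniform stability of the perturbed system.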
(2) Let us examine the function γ(s) in Theorem 8.1.
Now ∫_{t0}^{∞} γ(s) ds < ∞ is a growth condition on γ as t → ∞. It is also the case that “γ(t) → 0 as t → ∞” is a type of growth condition.
Is it possible that ∫_{t0}^{∞} γ(s) ds < ∞ and yet γ(t) ↛ 0 as t → ∞?
Consider the function e^{−t} in Figure 8.2. Now alter this function by adding spikes that become more and more narrow, but taller, where say γ(n) = n, yet ∫_{t0}^{∞} γ(s) ds < ∞. The answer to the question is in the affirmative. See Figure 8.3.
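One concrete construction (ours, in the spirit of Figure 8.3): on top of e^{−t}, put a triangular spike of height n and base width 2/n³ at each integer n ≥ 2, say
γ(t) = e^{−t} + ∑_{n=2}^{∞} n · max{0, 1 − n³|t − n|}.
Each spike has area n · (1/n³) = 1/n², so ∫_0^∞ γ(s) ds ≤ 1 + ∑_{n=2}^{∞} 1/n² < ∞, while γ(n) ≥ n, so γ(t) does not tend to 0.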
(3) In light of (2), let us consider the following example; that is, we look at an example where the hypothesis ∫_{t0}^{∞} γ(s) ds < ∞ of Theorem 8.1 is replaced by “γ(t) → 0 as t → ∞”. The resulting problem will not be stable.
We first state the result in question.
Theorem 8.2. Assume that A(t) is a continuous n × n matrix function on [t0, ∞) and that the unperturbed system x′ = A(t)x satisfies the integral condition
(8.11) ∫_{t0}^{t} ‖X(t)X^{−1}(s)‖ ds ≤ N, for all t ≥ t0,
for some constant N > 0, where X(t) is the solution of X′ = A(t)X, X(t0) = I. In the perturbed system
(8.12) x′ = A(t)x + f(t, x),
assume that f(t, x) is continuous on [t0, ∞) × Br(0) and that ‖f(t, x)‖ ≤ γ‖x‖ there, where the constant γ satisfies γN < 1. Then the zero solution of (8.12) is asymptotically stable on [t0, ∞).
Proof. Let ‖x0‖ < r and x(t) ≡ x(t; t0, x0) be a solution of (8.12). Let [t0, ω) be its right maximal interval of existence. It follows from (8.11) and Lemma 8.1 that ‖X(t)‖ is bounded; hence, ‖X(t)‖ ≤ M on [t0, ∞) for some M > 0. The remainder of the argument parallels the proof of Theorem 8.1. ☐
We now make the observation that the integral condition in Theorem 8.2 is stronger than asymptotic stability of the unperturbed system and weaker than uniform asymptotic stability of the unperturbed system. For example, suppose x′ = A(t)x is uniformly asymptotically stable on [t0, ∞). Then ‖X(t)X^{−1}(s)‖ ≤ K e^{−α(t−s)}, t0 ≤ s ≤ t < ∞, where K, α are positive constants. Then
∫_{t0}^{t} ‖X(t)X^{−1}(s)‖ ds ≤ K ∫_{t0}^{t} e^{−α(t−s)} ds = (K/α)(1 − e^{−α(t−t0)}) ≤ K/α,
so the integral condition holds.
Exercise 41. Consider the equation x″ + (2/t)x′ + x = 0 on [1, ∞), which has linearly independent solutions (sin t)/t and (cos t)/t. Show that the zero solution is both uniformly stable and asymptotically stable on [1, ∞).
Now consider the equation x″ − (2/t)x′ + x = 0, with L.I. solutions sin t − t cos t, cos t + t sin t.
The corresponding perturbed system is
x′ = (A(t) + B(t))x on [1, ∞), where A(t) = \begin{pmatrix} 0 & 1 \\ −1 & −2/t \end{pmatrix} and B(t) = \begin{pmatrix} 0 & 0 \\ 0 & 4/t \end{pmatrix},
so that ‖B(t)‖ → 0 as t → ∞.
From the solutions sin t − t cos t and cos t + t sin t, clearly the zero solution of the perturbed system is not asymptotically stable, whereas in the exercise, the unperturbed system is asymptotically stable. Therefore, the integral condition in Theorem 8.2 is strictly stronger than asymptotic stability of the unperturbed system.
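As a check on the solutions above (our computation): for u(t) = sin t − t cos t, u′(t) = t sin t and u″(t) = sin t + t cos t, so
u″ − (2/t)u′ + u = (sin t + t cos t) − 2 sin t + (sin t − t cos t) = 0,
and |u(t)| grows like t as t → ∞, which is why the zero solution of the perturbed system cannot be asymptotically stable.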
Theorem 8.3. Assume that the unperturbed system (8.4), x′ = A(t)x, is uniformly asymptotically stable on [t0, ∞), and let K > 0, α > 0 be such that ‖X(t)X^{−1}(s)‖ ≤ K e^{−α(t−s)}, for t0 ≤ s ≤ t < ∞, where X(t) is the solution of
X′ = A(t)X, X(t0) = I.
Assume f(t, x) is continuous on [t0, ∞) × Br(0), for some r > 0, and satisfies ‖f(t, x)‖ ≤ γ‖x‖ for a constant γ with γ < α/K. Then every solution x(t; t0, x0) of (8.5), x′ = A(t)x + f(t, x), with ‖x0‖ < min{K^{−1}r, r} exists on [t0, ∞) and satisfies
‖x(t; t0, x0)‖ ≤ K‖x0‖ e^{−(α−Kγ)(t−t0)} on [t0, ∞).
Proof. Let ‖x0‖ < min{K^{−1}r, r} and let [t0, ω) be the right maximal interval of existence of x(t) ≡ x(t; t0, x0), a solution of (8.5). Then, for any t0 ≤ t1 ≤ t < ω,
x(t) = X(t)X^{−1}(t1)x(t1) + ∫_{t1}^{t} X(t)X^{−1}(s) f(s, x(s)) ds,
so that
‖x(t)‖ ≤ K e^{−α(t−t1)} ‖x(t1)‖ + K γ ∫_{t1}^{t} e^{−α(t−s)} ‖x(s)‖ ds.
Multiplying by e^{αt} and applying the Gronwall inequality to e^{αt}‖x(t)‖ gives
‖x(t)‖ ≤ K‖x(t1)‖ e^{−(α−Kγ)(t−t1)} on [t1, ω).
By our choice of ‖x0‖, we have K‖x0‖ < r, thus ‖x(t)‖ < r on [t0, ω). It follows as in previous results that ω = +∞. Hence, from above, we have
‖x(t; t0, x0)‖ ≤ K‖x0‖ e^{−(α−Kγ)(t−t0)} on [t0, ∞). ☐
Corollary 8.4. If the hypotheses of Theorem 8.3 are satisfied, then the solution x(t) ≡ 0 of (8.5) is uniformly asymptotically stable.
Exercise 42. Prove Corollary 8.4.
Corollary 8.5. Assume that A(t), B(t) are continuous n × n matrix functions on [t0, ∞), that x′ = A(t)x is uniformly asymptotically stable on [t0, ∞), and that ‖B(t)‖ → 0 as t → ∞.
Then the system x′ = (A(t) + B(t))x is uniformly asymptotically stable on [t0, ∞); i.e., the zero solution is uniformly asymptotically stable.
A
adjoint system, 94
asymptotically stable, 136
B
basis, 80
C
Cauchy function, 99
characteristic exponents, 123
characteristic multipliers, 123
characteristic polynomial, 104
classical solution, 1
comparison theorem, 65
complete metric space, 84
continuation of a solution, 29
continuity of solutions with respect to parameters, 45
continuous dependence of solutions on initial conditions, 42, 44
Contraction Mapping Principle, 130
D
differentiation with respect to initial conditions, 51
differentiation with respect to parameters, 55
Dini derivatives, 65
E
eigenvalue, 103
eigenvector, 103
F
first variational equation, 57
Floquet’s theorem, 121
fundamental matrix solution, 94
G
Gronwall inequality, 12
H
Hill’s equation, 124
I
initial value problem, 2
inner product, 83
J
Jordan canonical form, 111
K
Kamke convergence theorem, 37
Kamke uniqueness theorem, 73
kinematically similar matrices, 122
L
left maximal interval of existence, 31
linear matrix systems, 88
linear systems, 77
linearly independent, 80
Lipschitz condition, 4
logarithm of a matrix, 116
M
maximal interval of existence, 31
maximal solution, 61
metric space, 84
minimal solution, 61
N
Nagumo uniqueness result, 74
norm, 3
null space, 83
P
Peano existence theorem, 18
periodic matrix, 121
Picard existence theorem, 7, 132
Picard iterates, 8
projection, 156
R
range space, 83
Riemann integrable matrix, 87
right maximal interval of existence, 31
S
series of matrices, 85
similar matrices, 111
simple type eigenvalue, 145
solution space, 80
span, 80
stable, 47, 135, 140
strongly stable, 137
U
uniformly asymptotically stable, 137
uniformly stable, 136
V
variation of constants formula, 95