
Ordinary Differential Equations and Boundary Value Problems

Volume I: Advanced Ordinary Differential Equations
TRENDS IN ABSTRACT AND APPLIED ANALYSIS

ISSN: 2424-8746

Series Editor: John R. Graef
The University of Tennessee at Chattanooga, USA

This series will provide state-of-the-art results and applications on current topics in the broad area of mathematical analysis. Of a more focused nature than what is usually found in standard textbooks, these volumes will provide researchers and graduate students with a path to the research frontiers in an easily accessible manner. In addition to being useful for individual study, they will also be appropriate for use in graduate and advanced undergraduate courses and research seminars. The volumes in this series will be of interest not only to mathematicians but also to scientists in other areas. For more information, please go to
http://www.worldscientific.com/series/taaa

Published

Vol. 7  Ordinary Differential Equations and Boundary Value Problems, Volume I: Advanced Ordinary Differential Equations, by John R. Graef, Johnny Henderson, Lingju Kong & Xueyan Sherry Liu

Vol. 6  The Strong Nonlinear Limit-Point/Limit-Circle Problem, by Miroslav Bartušek & John R. Graef

Vol. 5  Higher Order Boundary Value Problems on Unbounded Domains: Types of Solutions, Functional Problems and Applications, by Feliz Manuel Minhós & Hugo Carrasco

Vol. 4  Quantum Calculus: New Concepts, Impulsive IVPs and BVPs, Inequalities, by Bashir Ahmad, Sotiris Ntouyas & Jessada Tariboon

Vol. 3  Solutions of Nonlinear Differential Equations: Existence Results via the Variational Approach, by Lin Li & Shu-Zhi Song

Vol. 2  Nonlinear Interpolation and Boundary Value Problems, by Paul W. Eloe & Johnny Henderson

Vol. 1  Multiple Solutions of Boundary Value Problems: A Variational Approach, by John R. Graef & Lingju Kong

More information on this series can be found at


http://www.worldscientific.com/series/taaa
Ordinary Differential Equations and Boundary Value Problems

Volume I: Advanced Ordinary Differential Equations

John R Graef

University of Tennessee at Chattanooga, USA

Johnny Henderson

Baylor University, USA

Lingju Kong

University of Tennessee at Chattanooga, USA

Xueyan Sherry Liu

St Jude Children’s Research Hospital, USA


Published by

World Scientific Publishing Co. Pte. Ltd.

5 Toh Tuck Link, Singapore 596224

USA office: 27 Warren Street, Suite 401-402, Hackensack, NJ 07601

UK office: 57 Shelton Street, Covent Garden, London WC2H 9HE

Library of Congress Cataloging-in-Publication Data

Names: Graef, John R., 1942– author.

Title: Ordinary differential equations and boundary value problems / by John R. Graef (University of Tennessee at Chattanooga, USA) [and three others].

Description: New Jersey : World Scientific, 2018– | Series: Trends in abstract and applied analysis ; volume 7 | Includes bibliographical references and index.

Contents: volume 1. Advanced ordinary differential equations

Identifiers: LCCN 2017060286 | ISBN 9789813236455 (hc : alk. paper : v. 1)

Subjects: LCSH: Differential equations. | Boundary value problems.

Classification: LCC QA372 .O7218 2018 | DDC 515/.352--dc23

LC record available at https://lccn.loc.gov/2017060286

British Library Cataloguing-in-Publication Data

A catalogue record for this book is available from the British Library.

Copyright © 2018 by World Scientific Publishing Co. Pte. Ltd.


All rights reserved. This book, or parts thereof, may not be reproduced in
any form or by any means, electronic or mechanical, including
photocopying, recording or any information storage and retrieval system
now known or to be invented, without written permission from the
publisher.

For photocopying of material in this volume, please pay a copying fee


through the Copyright Clearance Center, Inc., 222 Rosewood Drive,
Danvers, MA 01923, USA. In this case permission to photocopy is not
required from the publisher.

For any available supplementary material, please visit

http://www.worldscientific.com/worldscibooks/10.1142/10888#t=suppl

Desk Editors: V. Vishnu Mohan/Kwong Lai Fun

Typeset by Stallion Press

Email: [email protected]

Printed in Singapore
Dedication
John Graef dedicates this work to his wife Frances and his Ph.D. advisor T.
A. Burton.

Johnny Henderson’s dedication first is to his friend, Allan C. Peterson, and


second is to his cadre of doctoral students: Jeffrey Allen Ehme, Anjali
Datta, Eric Roger Kaufmann, Kuo-Chuan William Yin, Rena Denis
Taunton Reid, Feng-Chun Charlie Fang, Tuwaner Mae Hudson Lamar,
Susan Denese Stenger Lauer, John Marcus Davis, Alvina M. Johnson
Atkinson, Nickolai Kosmatov, Kathleen Marie Goeden Fick, Ana Maria
Maturana Tameru, Parmjeet Kaur Singh Cobb, Basant Kumar Karna, Ding
Ma, Michael Jeffery Gray, Mariette R. Maroun, John Emery Ehrke, Curtis
John Kunkel, Britney Jill Hopkins, Jeffrey Wayne Lyons, Jeffrey Thomas
Neugebauer, Xueyan Sherry Liu, Shawn Michael Sutherland, Charles
Franklin Nelms, Jr. and Brian Christopher Pennington.

Lingju Kong dedicates this book to those mathematicians who influenced


his research.

Xueyan Liu’s dedication is to her Ph.D. advisor Johnny Henderson, MS


advisor Binggen Zhang, postdoctoral supervisor Hui Zhang, colleague Deo
Kumar Srivastava, and former colleagues John Graef, Lingju Kong, Min
Wang, Cuilan Gao, Jin Wang, Andrew Ledoan, Xuhua Liu, academic
friends Richard Avery, Douglas Anderson, Yu Tian, and academic cousins
Jeffrey Thomas Neugebauer, Jeffrey Wayne Lyons, and Shawn Michael
Sutherland.
Preface
In this work we give a treatment of the theory of ordinary differential
equations (ODEs) that is appropriate to use for a first course at the graduate
level as well as for individual study. The book is written in a somewhat conversational style, and we hope the reader will find it to be a captivating introduction to this fascinating area of study. A number of nonroutine exercises are dispersed throughout the book to help readers explore and expand their knowledge.

We begin with a study of initial value problems for systems of differential


equations including the Picard and Peano existence theorems. The
continuability of solutions, their continuous dependence on initial
conditions, and their continuous dependence with respect to parameters are
presented in detail. This is followed by a chapter on the differentiability of
solutions with respect to initial conditions and with respect to parameters.
Comparison results and differential inequalities come next.

Linear systems of differential equations are treated in detail as is


appropriate for a study of ODEs at this level. We believe that just the right
amount of basic properties of matrices is introduced to facilitate the study
matrix systems and especially those with constant coefficients. Floquet
theory for linear periodic systems is presented and used to study
nonhomogeneous linear systems.

Stability theory for first order and vector linear systems is considered. The relationships among stability of solutions, uniform stability, asymptotic stability, uniform asymptotic stability, and strong stability are discussed
and illustrated with examples. A section on the stability of vector linear
systems is included. The book concludes with a chapter on perturbed
systems of ODEs.

A second volume devoted to boundary value problems is to follow. It can


be used as a “stand alone” work or as a natural sequel to what is presented
here.
John R. Graef
Johnny Henderson
Lingju Kong
Xueyan “Sherry” Liu
Contents

Preface

1. Systems of Differential Equations


1.1 Introduction
1.2 The Initial Value Problem (IVP)
1.3 The Picard Existence Theorem
1.4 The Peano Existence Theorem

2. Continuation of Solutions and Maximal Intervals of Existence


2.1 Continuation of Solutions
2.2 Kamke Convergence Theorem
2.3 Continuous Dependence of Solutions on Initial Conditions
2.4 Continuity of Solutions wrt Parameters

3. Smooth Dependence on Initial Conditions and Smooth


Dependence on Parameters
3.1 Differentiation of Solutions wrt Initial Conditions
3.2 Differentiation of Solutions wrt Parameters
3.3 Maximal Solutions and Minimal Solutions

4. Some Comparison Theorems and Differential Inequalities


4.1 Comparison Theorems and Differential Inequalities
4.2 Kamke Uniqueness Theorem

5. Linear Systems of Differential Equations


5.1 Linear Systems of Differential Equations
5.2 Some Properties of Matrices
5.3 Infinite Series of Matrices and Matrix-Valued Functions
5.4 Linear Matrix System
5.5 Higher Order Differential Equations
5.6 Systems of Equations with Constant Coefficient Matrices
5.7 The Logarithm of a Matrix

6. Periodic Linear Systems and Floquet Theory


6.1 Periodic Homogeneous Linear Systems and Floquet Theory
6.2 Periodic Nonhomogeneous Linear Systems and Floquet Theory

7. Stability Theory
7.1 Stability of First Order Systems
7.2 Stability of Vector Linear Systems

8. Perturbed Systems and More on Existence of Periodic


Solutions
8.1 Perturbed Linear Systems

Bibliography

Index
Chapter 1
Systems of Differential Equations
1.1 Introduction

We shall be concerned with solutions of systems of ordinary differential


equations (ODE’s). Let f(t, x) be defined on some set D ⊆ ℝ × ℝn, in the sense that t ∈ ℝ and x ∈ ℝn (i.e., x is an n-dimensional vector). We shall record x as a column vector, so

x = (x1, x2, …, xn)T.

Moreover, let f : D → ℝn, so that f is also a column vector, and in particular,

f(t, x) = (f1(t, x), f2(t, x), …, fn(t, x))T.

Definition 1.1 (Classical Solution). An n-dimensional vector-valued


function, φ(t), is said to be a solution of the differential equation (D.E.) x′ =
f(t, x) on an interval I in case:

(1)φ ∈ C(1)(I),

(2)(t, φ (t)) ∈ D, for all t ∈ I,

(3)φ′(t) ≡ f(t, φ(t)) on I.

Now in the above definition, φ′(t) = (φ1′(t), φ2′(t), …, φn′(t))T.

The solution described in the definition above is called a classical solution.


Although we will not be concerned with such, we can discuss a solution in
a more general sense (the almost everywhere (a.e.) or Lebesgue sense). In
that case, such a solution φ of x′ = f(t, x) would satisfy:

(1)φ is absolutely continuous on I,

(2)(t, φ(t)) ∈ D, for all t ∈ I,

(3)φ′(t) = f(t, φ(t)) a.e. on I.

Exercise 1. Let f(t, x) be continuous on D ⊆ ℝ × ℝn and let φ(t) be an a.e.


solution on an interval I. Prove φ(t) is a classical solution.

We now shall begin our work towards the local existence theorems.

1.2 The Initial Value Problem (IVP)

Given f(t, x) defined on D ⊆ ℝ × ℝn and (t0, x0) ∈ D, we seek an interval I


and an n-vector function φ(t) such that t0 ∈ I, φ(t0) = x0, φ ∈ C(1)(I), (t,
φ(t)) ∈ D on I, and φ′(t) ≡ f(t, φ(t)) on I. Such a vector-valued function φ(t)
is said to be a solution of the initial value problem (IVP) on I and we
usually indicate this IVP by

x′ = f(t, x),  x(t0) = x0.  (1.1)

Consider now the relationship between a solution of IVP (1.1) and a


solution of an integral equation. Assume that f(t, x) is continuous on D and
that φ(t) is a solution of IVP (1.1) on the interval I. Then, for all t ∈ I, we
have φ′(s) ≡ f(s, φ(s)) on the interval with endpoints t0 and t. By the
continuity of f(s, φ(s)), we have that φ′(s) is integrable over the interval
with endpoints t0 and t, and

∫_{t0}^t φ′(s) ds = ∫_{t0}^t f(s, φ(s)) ds,

which yields

φ(t) − φ(t0) = ∫_{t0}^t f(s, φ(s)) ds,

that is,

φ(t) = x0 + ∫_{t0}^t f(s, φ(s)) ds.  (1.2)

In particular, if φ ∈ C(1)(I) and is also a solution of IVP (1.1) on I, then φ


∈ C(I) and is a solution of the integral equation (1.2) on I.

Conversely, if φ ∈ C(I), {(t, φ(t)) | t ∈ I} ⊆ D, and φ satisfies (1.2) on I, then φ(t0) = x0; also, since f ∈ C(D), we have that f(t, φ(t)) ∈ C(I), and consequently ∫_{t0}^t f(s, φ(s)) ds is continuously differentiable. Thus,

φ′(t) = f(t, φ(t)) on I and φ(t0) = x0.

Therefore, φ is a solution of IVP (1.1).

Before discussing the local existence theorems, we will briefly consider


some properties of “norms”.

Definition 1.2 Let V be a vector space. Then a norm ‖·‖ : V → ℝ has the
properties:

(1)‖v‖ ≥ 0, and ‖v‖ = 0 if and only if v = 0,

(2)‖αv‖ = |α|‖v‖, for all α ∈ ℝ, v ∈ V,

(3)‖v1 + v2‖ ≤ ‖v1‖ + ‖v2‖, for all v1, v2 ∈ V.

On ℝn, there are several norms. Suppose

x = (x1, x2, …, xn)T ∈ ℝn.

Some convenient norms are given by ‖x‖ = max_{1≤i≤n} |xi|, ‖x‖ = Σ_{i=1}^n |xi|, or ‖x‖ = (Σ_{i=1}^n xi²)^{1/2}.

On several occasions, we will also be dealing with linear transformations A


: ℝn → ℝn, which can be expressed by an n × n matrix whose action is
determined by a set of basis elements. We will use the standard basis {e1, e2, …, en} of ℝn and denote A = [aij], 1 ≤ i, j ≤ n,

with matrix multiplication being the action of A on an n-vector.

Definition 1.3. Associated with such a transformation A is a norm defined by

‖A‖ = sup{‖Ax‖ : ‖x‖ = 1} = sup{‖Ax‖/‖x‖ : x ≠ 0}.

These are equivalent definitions.

Exercise 2. Take ‖x‖ = max_{1≤i≤n} |xi|. Show that if A : ℝn → ℝn is as above, then its induced norm is given by

‖A‖ = max_{1≤i≤n} Σ_{j=1}^n |aij|.
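The row-sum expression can be checked numerically. Below is a short Python sketch (illustrative only, not part of the text) that compares it with a brute-force evaluation of sup{‖Ax‖ : ‖x‖ = 1}; for this particular norm the supremum is attained at vectors whose entries are ±1, so it suffices to test all sign vectors.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4))

    # Row-sum formula: max_i sum_j |a_ij| (the claimed induced norm for the max-norm).
    row_sum_norm = np.max(np.abs(A).sum(axis=1))

    # Brute force over all sign vectors x in {-1, +1}^n, each of which has ||x||_inf = 1.
    n = A.shape[1]
    best = 0.0
    for bits in range(2 ** n):
        x = np.array([1.0 if bits & (1 << j) else -1.0 for j in range(n)])
        best = max(best, np.max(np.abs(A @ x)))

    print(row_sum_norm, best)  # the two numbers agree up to rounding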

1.3 The Picard Existence Theorem

We introduce some background material for this classical result.

Definition 1.4. The function f(t, x) is said to satisfy a Lipschitz condition


with respect to (wrt for short) x on D ⊆ ℝ × ℝn in case there is a constant
K > 0 such that

‖f(t, x) − f(t, y)‖ ≤ K‖x − y‖, for all (t, x), (t, y) ∈ D.


Fig. 1.1 Rectangles Q1 and Q2.

Definition 1.5. f(t, x) is said to satisfy a local Lipschitz condition wrt x on


D ⊆ ℝ × ℝn in case, corresponding to every “rectangle” Q = {(t, x) | |t – t0| ≤ a, ‖x – x0‖ ≤ b} ⊆ D, there is a constant KQ > 0 such that

‖f(t, x) − f(t, y)‖ ≤ KQ‖x − y‖, for all (t, x), (t, y) ∈ Q.

Note: As one goes from one rectangle to another, the constant KQ varies,
i.e., KQ1 is not necessarily equal to KQ2. See Figure 1.1.

Lemma 1.1. Let f(t, x) and the partial derivatives ∂fi/∂xj, 1 ≤ i, j ≤ n, be continuous on an open set D ⊆ ℝ × ℝn. Then f satisfies a local Lipschitz condition wrt x on D.

Proof. Let fx(t, x) denote the Jacobian matrix of f(t, x); i.e.,

fx(t, x) = [∂fi(t, x)/∂xj], 1 ≤ i, j ≤ n.

Now let Q = {(t, x) | |t – t0 | ≤ a, ‖x – x0 ‖ ≤ b} ⊆ D be a rectangle in D, and


let KQ = max_{(t, x)∈Q} ‖fx(t, x)‖. Note here that, by Exercise 2,

‖fx(t, x)‖ = max_{1≤i≤n} Σ_{j=1}^n |∂fi(t, x)/∂xj|.

We claim that f satisfies a Lipschitz condition wrt x on Q with Lipschitz


coefficient KQ. (By continuity, it is true that KQ exists.)

For the claim, let (t, x), (t, y) ∈ Q and set z(s) = (1 – s)x + sy, 0 ≤ s ≤ 1. We
will first show that (t, z(s)) ∈ Q, for all 0 ≤ s ≤ 1. Consider

‖z(s) − x0‖ = ‖(1 − s)(x − x0) + s(y − x0)‖ ≤ (1 − s)‖x − x0‖ + s‖y − x0‖ ≤ (1 − s)b + sb = b.

Since |t – t0| ≤ a by choice of t, we have (t, z(s)) ∈ Q, 0 ≤ s ≤ 1.

(One might note here that we have shown Q to be a convex set wrt x.)

Above where we have z(s) = (1 – s)x + sy, we mean of course that z1(s) = (1
– s)x1 + sy1, z2(s) = (1 – s)x2 + sy2, …, zn(s) = (1 – s)xn + syn, since x, y ∈
ℝn.

Thus, recalling the chain rule, (d/ds)fi(t, z(s)) = Σ_{j=1}^n (∂fi/∂xj)(t, z(s)) zj′(s), etc., so that

(d/ds) f(t, z(s)) = fx(t, z(s))(y − x).
Hence,

f(t, z(1)) − f(t, z(0)) = ∫_0^1 (d/ds) f(t, z(s)) ds,

which implies

f(t, z(1)) − f(t, z(0)) = ∫_0^1 fx(t, z(s))(y − x) ds.

But z(1) = y and z(0) = x, so f(t, y) − f(t, x) = ∫_0^1 fx(t, z(s))(y − x) ds. [This is the Mean Value Theorem for vector-valued functions.]

Finally, we have

‖f(t, y) − f(t, x)‖ ≤ ∫_0^1 ‖fx(t, z(s))‖ ‖y − x‖ ds ≤ KQ‖y − x‖.

Therefore, f satisfies a local Lipschitz condition wrt x. ☐

Theorem 1.1 (Picard Existence Theorem). Let f(t, x) be continuous and


satisfy a local Lipschitz condition wrt x on an open set D ⊆ ℝ × ℝn. Let
(t0, x0) ∈ D and assume that the numbers a, b > 0 are such that the
rectangle Q = {(t, x) | |t – t0| ≤ a, ‖x – x0‖ ≤ b} ⊆ D. Let M = max_{(t,x)∈Q} ‖f(t, x)‖ and let α = min{a, b/M}.

Then the IVP,

x′ = f(t, x),  x(t0) = x0,

has a unique solution on [t0 – α, t0 + α].

Proof. It suffices to show that there is a unique continuous n-vector function φ(t) on [t0 – α, t0 + α] such that (t, φ(t)) ∈ Q ⊆ D, for |t – t0| ≤ α, and φ(t) = x0 + ∫_{t0}^t f(s, φ(s)) ds on [t0 – α, t0 + α].

Before proceeding with the proof, let’s consider an example.

Example 1.1 Let x′ = t² + x² = f(t, x). Then f is continuous on D = the (t, x)-plane and f satisfies a local Lipschitz condition on the plane. Let us try to calculate the value of α.

Consider the IVP x′ = t² + x², x(0) = 0, and let a, b > 0 be fixed. Let the rectangle Q = {(t, x) | |t| ≤ a, |x| ≤ b} ⊆ D = the (t, x)-plane. See Figure 1.2.


Fig. 1.2 Rectangle Q.

Now max_{(t,x)∈Q} f(t, x) occurs at the corners of Q, and thus M = a² + b², so that α = min{a, b/(a² + b²)}.

For this example, we can determine the best possible (maximum) α. For fixed a, we find that the maximum of b/(a² + b²) as a function of b happens when b = a, so that max_b b/(a² + b²) = 1/(2a). Thus α = min{a, 1/(2a)}.

To find the maximum α, consider the decreasing function s = 1/(2a) and the increasing function s = a. The maximum α occurs when a = 1/(2a), that is, a = 1/√2, and we have α = 1/√2. See Figure 1.3.


Fig. 1.3 Graphs of s = a and s = 1/(2a).

Therefore, by the Picard Theorem, the IVP has a unique solution on [−1/√2, 1/√2].
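This maximization is easy to confirm numerically. The following Python sketch (illustrative only, assuming the formula α = min{a, b/(a² + b²)} obtained above) searches a grid of rectangle sizes:

    import numpy as np

    def alpha(a, b):
        # alpha = min{a, b/M} with M = max over Q of (t^2 + x^2) = a^2 + b^2
        return min(a, b / (a**2 + b**2))

    grid = np.linspace(0.01, 3.0, 600)
    best = max((alpha(a, b), a, b) for a in grid for b in grid)
    print(best)  # roughly (0.707, 0.707, 0.707): alpha_max = 1/sqrt(2) at a = b = 1/sqrt(2)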

We now continue the proof by showing that the integral equation φ(t) = x0 + ∫_{t0}^t f(s, φ(s)) ds has a unique continuous solution on [t0 – α, t0 + α]. Define a sequence {xn(t)}, called the sequence of Picard iterates, by

x0(t) ≡ x0,  xn+1(t) = x0 + ∫_{t0}^t f(s, xn(s)) ds,  n ≥ 0.

We shall first prove that this sequence is well-defined by showing that (t,
xn(t)) ∈ Q, for |t – t0| ≤ α and n ≥ 0. Clearly (t, x0(t)) = (t, x0) ∈ Q, for |t –
t0| ≤ α ≤ a. We proceed by induction.
Assume that (t, xn(t)) ∈ Q, for |t – t0| ≤ α and 0 ≤ n ≤ k – 1. Consider

xk(t) = x0 + ∫_{t0}^t f(s, xk−1(s)) ds.

Then

‖xk(t) − x0‖ ≤ |∫_{t0}^t ‖f(s, xk−1(s))‖ ds|.

By the induction hypothesis, (s, xk−1(s)) ∈ Q implies ‖f(s, xk−1(s))‖ ≤ M, thus

‖xk(t) − x0‖ ≤ M|t − t0| ≤ Mα ≤ b.

So, (t, xk (t)) ∈ Q, for |t – t0| ≤ α. Therefore, by induction, we have (t, xn(t))
∈ Q, for |t – t0 | ≤ α and n ≥ 0.

Now, let K ≥ 0 be the Lipschitz coefficient for f on Q. Then we have

‖x1(t) − x0(t)‖ ≤ M|t − t0|, and

‖x2(t) − x1(t)‖ ≤ |∫_{t0}^t ‖f(s, x1(s)) − f(s, x0(s))‖ ds| ≤ K|∫_{t0}^t ‖x1(s) − x0(s)‖ ds| ≤ MK|t − t0|²/2.

We claim that ‖xn+1(t) − xn(t)‖ ≤ MK^n|t − t0|^{n+1}/(n + 1)! on [t0 – α, t0 + α] for n ≥ 0.

Again, we argue inductively. From above, it follows that the assertion is true for n = 0, 1. Assume the assertion is true for 0 ≤ n ≤ k – 1 and consider

‖xk+1(t) − xk(t)‖ ≤ K|∫_{t0}^t ‖xk(s) − xk−1(s)‖ ds| ≤ (MK^k/k!)|∫_{t0}^t |s − t0|^k ds| = MK^k|t − t0|^{k+1}/(k + 1)!.

Therefore, by induction, ‖xn+1(t) − xn(t)‖ ≤ MK^n|t − t0|^{n+1}/(n + 1)! for |t − t0| ≤ α and n ≥ 0. Thus, ‖xn+1(t) − xn(t)‖ ≤ MK^nα^{n+1}/(n + 1)! for n ≥ 0.

Let us consider the infinite series given by the telescoping sum

x0 + Σ_{n=1}^∞ [xn(t) − xn−1(t)],  (1.3)

whose partial sums are Sn−1 = x0 + (x1 − x0) + (x2 − x1) + ⋯ + (xn−1 − xn−2) + (xn − xn−1) = xn(t).
Now, ‖xn(t) − xn−1(t)‖ ≤ MK^{n−1}α^n/n!, and we also know that Σ_{n=1}^∞ MK^{n−1}α^n/n! converges (to what?).

Therefore, by the Weierstrass M-test, (1.3) converges uniformly on [t0 – α, t0 + α]; i.e., the sequence of partial sums {Sn} = {xn(t)} converges uniformly on [t0 – α, t0 + α]. So, let limn→∞ xn(t) := φ(t). Then φ(t) is continuous.

Now, since Q is a closed and bounded subset of ℝ × ℝn, Q is compact.


Hence f is uniformly continuous on Q, and as a consequence, {f(t, xn (t))}
converges uniformly on [t0 – α, t0 + α]. Hence, from the fact that xn+1(t) = x0 + ∫_{t0}^t f(s, xn(s)) ds, we can take limits as follows:

φ(t) = lim_{n→∞} xn+1(t) = x0 + lim_{n→∞} ∫_{t0}^t f(s, xn(s)) ds = x0 + ∫_{t0}^t lim_{n→∞} f(s, xn(s)) ds,

where the interchange of limit and integral is justified by the uniform convergence of {f(t, xn(t))}. Hence,

φ(t) = x0 + ∫_{t0}^t f(s, φ(s)) ds.

Therefore, φ satisfies the appropriate integral equation and is consequently


a solution of the desired IVP on [t0 – α, t0 + α ].

Before establishing the uniqueness of φ, we consider a number of asides.

Corollary 1.1 In the iterative sequence {xn(t)}, the error is bounded by


image.

Proof. Since φ(t) = limn→ ∞ xn(t), we have

image

Now

image

and so

image
Example 1.2 Let us compute this error for the IVP: x′ = t² + x², x(0) = 0.

Take D = ℝ × ℝ. Previously, we found that a unique solution exists on [−1/√2, 1/√2]. In this case our local Lipschitz coefficient is determined by K = max_{(t,x)∈Q} |∂f/∂x| = max_{(t,x)∈Q} |2x|, which yields

K = 2b = √2,

where we are taking

a = b = 1/√2

from the maximizing process for α. Also, M = a² + b² = 1. Thus, for any n = 0, 1, 2, …,

image

Exercise 3. For each of the following, determine the best possible α, the
corresponding Lipschitz coefficient K, and calculate the first 3 Picard
iterates.

(1) y′ = y³, y(0) = 2.

(2) y′ = x + xy², y(0) = 3 (α = 0.28).

(3) image
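Symbolic Picard iterates, such as those requested above, can also be generated mechanically. A minimal SymPy sketch (illustrative only; the helper name picard_iterates is ours, not the book's), applied to the IVP x′ = t² + x², x(0) = 0 of Example 1.1:

    import sympy as sp

    t, s = sp.symbols('t s')

    def picard_iterates(f, t0, x0, n):
        # Returns [x_0, x_1, ..., x_n] with x_{k+1}(t) = x0 + int_{t0}^t f(s, x_k(s)) ds.
        xs = [sp.Integer(x0)]
        for _ in range(n):
            integrand = f(s, xs[-1].subs(t, s))
            xs.append(x0 + sp.integrate(integrand, (s, t0, t)))
        return xs

    f = lambda tt, xx: tt**2 + xx**2
    for k, xk in enumerate(picard_iterates(f, 0, 0, 3)):
        print(k, sp.expand(xk))
    # x_1 = t**3/3, x_2 = t**3/3 + t**7/63,
    # x_3 = t**3/3 + t**7/63 + 2*t**11/2079 + t**15/59535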

Before completing the proof of Theorem 1.1, we first consider the following theorem:

Theorem 1.2 (Gronwall Inequality). Let f, g be continuous non-negative


real-valued functions defined on an interval I ⊆ ℝ. Let K(t) be a
continuous nonnegative function on I, and assume further that there is a point t0 ∈ I such that

f(t) ≤ g(t) + |∫_{t0}^t K(s) f(s) ds|, for all t ∈ I.  (1.4)

Then,

f(t) ≤ g(t) + |∫_{t0}^t K(s) g(s) e^{|∫_s^t K(u) du|} ds|, for all t ∈ I.  (1.5)

Proof. Case I: Let t ∈ I with t > t0 be fixed. Then by (1.4), f(u) ≤ g(u) + ∫_{t0}^u K(s) f(s) ds, for all t0 ≤ u ≤ t. Set ψ(u) = ∫_{t0}^u K(s) f(s) ds; then ψ is differentiable and ψ′(u) = K(u)f(u).

Since K(u) ≥ 0, we have

ψ′(u) = K(u)f(u) ≤ K(u)[g(u) + ψ(u)],

which is

ψ′(u) ≤ K(u)g(u) + K(u)ψ(u).

Rewriting as ψ′(u) – K(u)ψ(u) ≤ K(u)g(u), and then multiplying both sides of this latter inequality by the integrating factor e^{−∫_{t0}^u K(s) ds}, we have

e^{−∫_{t0}^u K(s) ds}[ψ′(u) − K(u)ψ(u)] ≤ K(u)g(u) e^{−∫_{t0}^u K(s) ds},

or

(d/du)[e^{−∫_{t0}^u K(s) ds} ψ(u)] ≤ K(u)g(u) e^{−∫_{t0}^u K(s) ds}.

Now integrate over [t0, t] and recall both sides are nonnegative, and we have

e^{−∫_{t0}^t K(s) ds} ψ(t) ≤ ∫_{t0}^t K(s)g(s) e^{−∫_{t0}^s K(u) du} ds,

which gives

ψ(t) ≤ ∫_{t0}^t K(s)g(s) e^{∫_s^t K(u) du} ds,

so that

f(t) ≤ g(t) + ψ(t) ≤ g(t) + ∫_{t0}^t K(s)g(s) e^{∫_s^t K(u) du} ds.

Therefore, (1.5) holds for all t ∈ I such that t > t0.

Exercise 4. Verify equation (1.5) when t < t0.

This completes the proof of this theorem. ☐

Corollary 1.2 If the hypotheses of Theorem 1.2 are satisfied, but g(t) ≡ 0
on I, then f ≡ 0 on I.

Proof. By (1.5) in Theorem 1.2, we have f(t) ≤ 0 on I. Since f is nonnegative on I, we have f ≡ 0 on I. ☐
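A numerical illustration (not a proof) of the inequality: the Python sketch below takes K ≡ 1, g ≡ 1, and f(t) = e^t, for which the hypothesis f(t) ≤ g(t) + ∫_0^t K(s)f(s) ds holds with equality, and evaluates the Gronwall bound g(t) + ∫_0^t K(s)g(s)e^{∫_s^t K(u) du} ds; all function choices are illustrative assumptions.

    import numpy as np
    from scipy.integrate import quad

    K = lambda s: 1.0          # weight function (illustrative choice)
    g = lambda t: 1.0          # comparison function (illustrative choice)
    f = lambda t: np.exp(t)    # satisfies f(t) = g(t) + int_0^t K(s) f(s) ds exactly

    def gronwall_bound(t, t0=0.0):
        # g(t) + int_{t0}^t K(s) g(s) exp(int_s^t K(u) du) ds
        inner = lambda s: K(s) * g(s) * np.exp(quad(K, s, t)[0])
        return g(t) + quad(inner, t0, t)[0]

    for t in [0.5, 1.0, 2.0]:
        print(t, f(t), gronwall_bound(t))  # f(t) <= bound (equality for this choice)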

Proof of Theorem 1.1 continued: We now establish the uniqueness of the


solution φ to the IVP (1.1) on [t0 – α, t0 + α]. What we mean by this is as
follows: Let φ(t) be the solution we obtained above on [t0 – α, t0 + α], and
let ψ (t) be a solution to the same IVP, x′ = f (t, x), x(t0) = x0, on an interval
J containing t0. Then we will show that φ (t) ≡ ψ (t) on [t0 – α, t0 + α] ∩ J.

For such a solution ψ (t), we claim that ‖ψ(t) – x0 ‖ ≤ b for all t ∈[t0 – α, t0
+ α ]∩ J.

Assume this is not true, i.e., (t, ψ (t)) ∉ Q for some t ∈[t0 – α, t0 + α] ∩ J.
Then ‖ψ(t) – x0‖ > b for some t ∈ [t0 – α, t0 + α] ∩ J. However, x0 = ψ
(t0), and hence by continuity, there exists τ ∈ (t0 – α, t0 + α) ∩ J such that
‖ψ(τ) – x0‖ = b and ‖ψ(t) – x0‖ < b on [t0, τ) or on (τ, t0].

Now image and so

image


Fig 1.4 The solution ψ(t).


which is a contradiction. Therefore, (t, ψ (t)) ∈ Q for all t ∈ [t0 – α, t0 +
α] ∩ J. Hence for any such t, we have ψ(t) = x0 + ∫_{t0}^t f(s, ψ(s)) ds, and so,

φ(t) − ψ(t) = ∫_{t0}^t [f(s, φ(s)) − f(s, ψ(s))] ds,

which yields

‖φ(t) − ψ(t)‖ ≤ K |∫_{t0}^t ‖φ(s) − ψ(s)‖ ds|.

By Corollary 1.2 to Theorem 1.2, it follows that ‖φ(t) – ψ (t)‖ = 0, that is,
φ(t) = ψ(t) for all t ∈ [t0 – α, t0 + α] ∩ J.

Therefore, φ is the unique solution and the proof of Theorem 1.1 is


complete. ☐

Theorem 1.3. Let f(t, x) be continuous on D = I × ℝn, where I is an interval


of the reals. Assume that f satisfies a Lipschitz condition wrt x on each
subset in ℝ × ℝn of the form [a, b] × ℝn, where [a, b] (compact) ⊆ I. Then,
for any t0 ∈ I and any x0 ∈ ℝn, the IVP x′ = f (t, x), x(t0) = x0 has a unique
solution on I.

Proof. Let a fixed IVP as specified be given and let τ ∈ I be fixed, but
arbitrary. Choose a compact interval [a, b] ⊆ I such that t0, τ ∈ [a, b].

Define a sequence {xn(t)} by

image

Then,

image

If M = maxa ≤ t ≤ b‖f(t, x0)‖, then ‖x1(t) – x0(t)‖ ≤ M|t – t0 | on [a, b].


Fig. 1.5 The sequence xn(t).


As in Theorem 1.1, we have image, for all n ≥ 0, t ∈ [a, b], where K is
the Lipschitz coefficient of f on [a, b] × ℝn. In particular, image. As in
Theorem 1.1, the sequence {xn(t)} will converge uniformly to a solution
φ(t) of the IVP on [a, b]. Since τ was chosen arbitrarily, φ is defined on the
entire interval I. Moreover, φ is unique from the Gronwall inequality, i.e.,
given another solution ψ (t) of the IVP and an arbitrary compact [a, b] ⊆ I,
with t0 ∈ image. By Corollary 1.2 to Theorem 1.2, ‖φ(t) – ψ(t)‖ ≡ 0 on
[a, b]. Hence, φ(t) ≡ ψ(t) on [a, b]. Since [a, b] was arbitrary, φ(t) ≡ ψ(t) on
I.☐

Example 1.3 (Where Theorem 1.3 is applicable). Consider the linear


system

xi′ = Σ_{j=1}^n aij(t)xj + hi(t),  1 ≤ i ≤ n,

where aij, hi, 1 ≤ i, j ≤ n, are real-valued continuous functions defined on an


interval I ⊆ ℝ.

Rewrite the system as

x′ = A(t)x + h(t) =: f(t, x), where A(t) = [aij(t)] and h(t) = (h1(t), …, hn(t))T.

Clearly f is continuous on I × ℝn. If [a, b] (compact) ⊆ I, consider ‖f(t, x) –


f(t, y)‖ = ‖A(t)(x – y)‖ ≤ K‖x – y‖, for (t, x), (t, y) ∈ [a, b] × ℝn, where K =
maxt∈[a, b]‖A(t)‖. If we take vector norm ‖x‖ = max1≤ j≤n|xj|, then

‖A(t)‖ = max_{1≤i≤n} Σ_{j=1}^n |aij(t)|, which is continuous in t and hence bounded on [a, b].

The hypotheses of Theorem 1.3 are satisfied, thus the theorem can be
applied.

Before we consider a second example, we will look at an nth order scalar


equation and its translation to a first order system.

Consider

x(n) = f(t, x, x′, …, x(n−1)),  (1.6)

where f and x are real-valued functions. Define y1 = x, y2 = x′, …, yn = x(n−1), so that

y1′ = y2, y2′ = y3, …, yn−1′ = yn, yn′ = f(t, y1, y2, …, yn),  (1.7)

or

y′ = (y2, y3, …, yn, f(t, y1, …, yn))T =: g(t, y).  (1.8)

Thus, we see that if x(t) ∈ C(n)(I) is a solution of (1.6) on I, then

y(t) = (x(t), x′(t), …, x(n−1)(t))T

is a solution of (1.7) or (1.8) on I with y ∈ C(1)(I).

Conversely, if y(t) ∈ C(1)(I) is a solution of (1.7) or (1.8) on I, then y1(t) is


a solution of (1.6) on I.

We can write an IVP for (1.8) as follows:

y′ = g(t, y),  y(t0) = c = (c1, c2, …, cn)T.

This is equivalent to the IVP for (1.6):

x(n) = f(t, x, x′, …, x(n−1)),  x(t0) = c1, x′(t0) = c2, …, x(n−1)(t0) = cn.
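This reduction is also how a higher order scalar equation is handed to a standard numerical solver. A minimal Python sketch (assuming SciPy is available; the particular equation x″ + x = 0 with x(0) = 0, x′(0) = 1 is chosen only for illustration):

    import numpy as np
    from scipy.integrate import solve_ivp

    # Second order scalar equation x'' = f(t, x, x') rewritten with
    # y1 = x, y2 = x', so that y1' = y2 and y2' = f(t, y1, y2).
    def rhs(t, y):
        y1, y2 = y
        return [y2, -y1]      # here f(t, x, x') = -x, i.e. x'' + x = 0

    sol = solve_ivp(rhs, (0.0, np.pi), [0.0, 1.0], rtol=1e-9)
    print(sol.y[0, -1])       # x(pi), close to sin(pi) = 0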

Example 1.4 We now consider a second example where Theorem 1.3 is


applicable. Suppose we have the linear equation image, where pi, h are
real-valued and continuous on an interval I ⊆ ℝ.

Translate to the first order system by letting y1 = x:

image

which we rewrite as y′ = A(t)y + h̃(t), where

image

Referring to Example 1.3, for any t0 ∈ I and any ci, 1 ≤ i ≤ n, the IVP
image

has a unique solution by Theorem 1.3. Equivalently, the IVP

image

has a unique solution x(t) ∈ C(n)(I).

Exercise 5. For the following second order scalar equations, show that
Theorem 1.3 applies to yield unique C(2) solutions on ℝ, and calculate their
solutions.

(1) image

(2) image

1.4 The Peano Existence Theorem

In the Picard existence theorem, the fact that f(t, x) was locally Lipschitz was instrumental in establishing the uniqueness as well as the existence of the solution φ.

Theorem 1.4 (Peano Existence Theorem). Let f(t, x) be continuous on the


open set D ⊆ ℝ × ℝn, let (t0, x0) ∈ D, and let a, b > 0 be such that the rectangle Q = {(t, x) | |t – t0| ≤ a, ‖x – x0‖ ≤ b} ⊆ D. Let M = max_{(t,x)∈Q} ‖f(t, x)‖ and α = min{a, b/M}. Then the IVP

x′ = f(t, x),  x(t0) = x0

has a solution on [t0 – α, t0 + α].

Proof. We will prove the existence of a solution on [t0, t0 + α]. In a similar


way, one can prove the existence of a solution on [t0 – α, t0]. Then the
solutions fit together at t0 to give a solution on [t0 – α, t0 + α].
We now construct a sequence of approximate solutions which consists of
polygonal lines, as follows:

For any integer n ≥ 1, partition [t0, t0 + α] into 2^n equal subintervals with partition points tj = t0 + j(α/2^n), j = 0, 1, …, 2^n.

Define xn(t) by

xn(t0) = x0,  xn(t) = xn(tj) + f(tj, xn(tj))(t − tj),  for tj ≤ t ≤ tj+1,  j = 0, 1, …, 2^n − 1.  (1.9)


Fig. 1.6 The graph of xn(t).
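The polygonal approximations xn(t) are exactly Euler's method with step α/2^n. A small Python sketch (illustrative only) that constructs the vertices of xn(t), here applied to the scalar example x′ = t² + x², x(0) = 0:

    import numpy as np

    def euler_polygon(f, t0, x0, alpha, n):
        # Vertices (t_j, x_n(t_j)) of the polygonal approximation on [t0, t0 + alpha]
        # built from 2**n equal subintervals, as in the proof above.
        ts = np.linspace(t0, t0 + alpha, 2**n + 1)
        xs = [float(x0)]
        for j in range(2**n):
            xs.append(xs[-1] + (ts[j + 1] - ts[j]) * f(ts[j], xs[-1]))
        return ts, np.array(xs)

    f = lambda t, x: t**2 + x**2
    ts, xs = euler_polygon(f, 0.0, 0.0, 1 / np.sqrt(2), 6)
    print(xs[-1])   # approximate value of the solution at t = 1/sqrt(2)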

The remainder of the proof consists of showing that a subsequence of


{xn(t)} converges to a solution of the IVP on [t0, t0 + α].

Claim 1. For all n ≥ 1, {(t, xn(t))} ⊆ Q, t0 ≤ t ≤ t0 + α; i.e., for all n ≥ 1,


‖xn(t) – x0‖ ≤ b on [t0, t0 + α].

Proof of Claim 1. On the first subinterval, t0 ≤ t ≤ t1, we have that ‖xn(t) –


x0‖ = ‖f(t0, x0) (t – t0)‖ ≤ M(t1 – t0) ≤ Mα ≤ b. Thus the claim is satisfied
on the first subinterval.

The proof proceeds by induction on j. Thus, assume now that ‖xn(t) – x0‖ ≤
b for t0 ≤ t ≤ tj and consider t ∈ [tj, tj+1].

Well,

image
Therefore, Claim 1 is true for t0 ≤ t ≤ tj+1 and hence the claim follows by
induction. ☐

Claim 2. For all n ≥ 1 and any t0 ≤ τ ≤ t ≤ t0 + α, ‖xn(t) – xn(τ)‖ ≤ M|t – τ|;


i.e., xn satisfies a Lipschitz condition.

Proof of Claim 2. (a) Assume tj ≤ τ ≤ t ≤ tj+1, for some j. From (1.9), we have

So,

(b) Assume tj ≤ τ ≤ tj+1 ≤ ⋯ ≤ tk ≤ t ≤ tk+1, for some j and k.

Then

Therefore, Claim 2 is true.☐

From Claim 1, our sequence of polygonal lines is uniformly bounded; i.e.,


‖xn(t)‖ ≤ ‖x0‖ + b on [t0, t0 + α]. From Claim 2, the sequence is
equicontinuous on [t0, t0 + α]. Consequently, by the Arzelà-Ascoli Theorem, there exists a subsequence {xnk(t)} which converges uniformly on [t0, t0 + α], say limk→∞ xnk(t) = φ(t).
Note here that since f(t, x) is continuous on Q (compact) ⊆ ℝ × ℝn, we
have that f(t, x) is uniformly continuous on Q. Hence, for each ε > 0, there
exists a δ > 0 such that, for (t, x), (τ, y) ∈ Q with |t – τ | < δ, ‖x – y‖ < δ we
have ‖f(t, x) – f(τ, y) ‖ < ∊. It follows that, if nk is large enough, say max
, then for t arbitrary, tj ≤ t ≤ tj+1, we have
and

Hence, for all tj ≤ t ≤ tj+1 and j = 0, 1, …, 2n,

But by (1.9),

Therefore,

From the construction of our polygonal line approximations, for t0 ≤ t ≤ t0


+ α, we have

which yields
But limnk→∞xnk (t) = φ (t) uniformly, which implies

So,

Therefore, φ(t) = x0 + ∫_{t0}^t f(s, φ(s)) ds, and φ(t) is a solution to the IVP. ☐

Example 1.5 This example illustrates that in the absence of a Lipschitz


condition, solutions are not necessarily unique.

Consider the scalar equation . Consider t ≥ t0 and the case


x(t) ≥ 0:

is a solution. Yet x(t) ≡ 0 is also a solution.

Fig 1.7 Nonunique solutions.


Exercise 6. Let f(t, x): ℝ× ℝ → ℝ be defined by . Show that f
does not satisfy a Lipschitz condition on any rectangle of the form Q = {(t,
x) | |t – t0| ≤ a, |x – 0| ≤ b}.

Remark 1.1. The IVP: has an infinite number of


solutions.

Exercise 7. Verify Remark 1.1, by finding an infinite number of solutions


of the IVP: .

Corollary 1.3 (To Theorem 1.4). Let D ⊆ ℝ × ℝn be open and let f: D →


ℝn be continuous. Then if K ⊆ D is compact, then there exists a δK > 0
such that for all (t0, x0) ∈ K, the IVP x′ = f(t, x), x(t0) = x0 has a solution on
[t0 – δK, t0 + δK].

Proof. Let η = 1, if D = ℝ × ℝn, and let if D ≠ ℝ× ℝn. Here d


is given as follows: If (t1, x1),(t2, x2) ∈ ℝ × ℝn, then d((t1, x1), (t2, x2)) =
|t1 – t2| + ‖x1 – x2‖.

Let H = {(t, x)|d((t, x), K) ≤ η}. Then H(compact) ⊆ D. Let M =


max(t,x)∈H‖f‖ and let (t0, x0) ∈ K. Then define
.

Fig 1.8 K ⊆ H ⊆ D.
Note: If , then .

Since (t0, x0) ∈ K, we have and so . Thus, Q ⊆ H ⊂


D. (Note: Q is the type of “Q-rectangle” in Theorem 1.4, where

Let . Then δK corresponds to the number α in


Theorem 1.4; consequently, the hypotheses of Theorem 1.4 are satisfied
and it follows that the IVP

has a solution on [t0 – δK, t0 + δK]. ☐

Before our final theorem in this section, we will look at some examples of
Picard iterates of functions not satisfying Lipschitz conditions. In the first
case, it will be true that the Picard iterates converge to desired solutions,
whereas, in the second case the Picard iterates will not converge.

Example 1.6 Consider the IVP

Obviously, does not satisfy a Lipschitz condition on any


rectangle containing x = 0. The Picard iterates are given by

Hence, all Picard iterates are zero, which converge to a solution φ ≡ 0.

Note that in the proof of the Picard Theorem, one doesn’t have to set x0(t) =
x0. The only real restriction is that {(t, x0(t))} ⊆ Q.
Hence, again consider the above IVP: . We can define x0
(t) = tα, α > 0 fixed, on [0, ∞):

Exercise 8. Show that where en(α) = and

Then , and if b = limn→∞


Cn(α), then implying .

Hence, which is also a solution of the IVP.

Thus, we have an example of an IVP with no Lipschitz condition, yet the


Picard iterates converge to two of the solutions, (recall, no unique solution).

Example 1.7 For this example, we consider a problem which does have a
unique solution, however there is no Lipschitz condition — moreover, the
Picard iterates do not converge. Consider

where f(t, x): [0,1]× ℝ → ℝ is given by

All the pieces are continuous except possibly at t = 0. The pieces fit
together at each break for x. Now near t = 0, limt → 0 f(t, x) = 0 = f(0, x) in
each region implies that f is continuous at t = 0 also.

Fig. 1.9 The function f(t, x).

Therefore, f is continuous on [0, 1] × ℝ. Notice that for a fixed value of t ∈


[0, 1], f(t, x) is a nonincreasing function of x.

Now let’s look at the Picard iterates. Let image. Then,

image

Therefore, the Picard iterates do not converge to a solution.

Before one last theorem in this section, we will develop a computational


aid:

Suppose f(t, x) is continuous on a slab [t0, t0 + α] × ℝn and suppose we


have an initial value problem x′ = f(t, x), x(t0) = x0.

Suppose, moreover, that x(t) and y(t) are both solutions of the IVP on [t0, t0
+ α].

Let the inner product of two vectors α, β ∈ ℝn be denoted

⟨α, β⟩ = Σ_{i=1}^n αiβi.

Then, on [t0, t0 + α], set

h(t) = ⟨x(t) − y(t), x(t) − y(t)⟩, so that h′(t) = 2⟨x(t) − y(t), f(t, x(t)) − f(t, y(t))⟩.

Note: h(t0) = ⟨x(t0) − y(t0), x(t0) − y(t0)⟩ = ⟨x0 − x0, x0 − x0⟩ = 0. Hence,


h(t) satisfies the IVP

image

If h′(t) ≤ 0 on [t0, t0 + α], it follows that h(t) ≡ 0 on [t0, t0 + α]. Thus, x(t) ≡
y(t), if h′(t) ≤ 0 on [t0, t0 + α].
Theorem 1.5. Let f(t, x) be continuous on the set Q = {(t, x) | t0 ≤ t ≤ t0 + α, ‖x – x0‖ ≤ b}. Assume that, for any (t, x1), (t, x2) ∈ Q,

⟨x1 − x2, f(t, x1) − f(t, x2)⟩ ≤ 0.

Then, if x(t) and y(t) are solutions of the IVP x′ = f(t, x), x(t0) = x0, each
with its graph contained in Q for t0 ≤ t ≤ t0 + α, it follows that x(t) ≡ y(t) on
[t0, t0 + α ].

Proof. Since {(t, x(t))}, {(t, y(t))} ⊆ Q and since x(t) and y(t) are solutions
of the IVP, we have

(d/dt)⟨x(t) − y(t), x(t) − y(t)⟩ = 2⟨x(t) − y(t), f(t, x(t)) − f(t, y(t))⟩ ≤ 0.

The middle expression is precisely h′(t) from above. Moreover, h(t0) =


0 and so x(t) ≡ y(t) on [t0, t0 + α].

Note: When x′ = f(t, x) is a scalar equation, (i.e., f(t, x): ℝ × ℝ → ℝ), the
condition given in the theorem becomes (x1 – x2)(f(t, x1) – f(t, x2)) ≤ 0
which is equivalent to saying that f(t, x) is nonincreasing in x, for all fixed t.

Exercise 9. In the preceding Example 1.7, find the unique solution of the
IVP on [0, 1]. Tell why the solution is unique.

Exercise 10. State and prove a theorem corresponding to Theorem 1.5 on


[t0 – α, t0].
Chapter 2

Continuation of Solutions and


Maximal Intervals of Existence

2.1 Continuation of Solutions

Definition 2.1. Let x(t) be a solution of x′ = f(t,x) on an interval I. Then a


solution y(t) is said to be a continuation (or extension) of x(t) in case y(t) is
a solution on an interval J such that I ⊊ J and x(t) ≡ y(t) on I.

Theorem 2.1. Let f(t, x) be continuous on D ⊆ ℝ × ℝn and assume that x(t)


is a solution of x′ = f(t, x) on (a, b) and there is a sequence of t-values with
tn ↑ b (i.e., {tn} is an increasing sequence that converges to b as n → ∞) such that limn→∞ x(tn) = x0 ∈ ℝn. Then, if there exist α, β, M > 0 such that ‖f(t, x)‖ ≤ M on D ∩ {(t, x) | 0 ≤ b − t ≤ α, ‖x − x0‖ ≤ β}, it follows that limt→b− x(t) exists and is equal to x0. Furthermore, if f(t, x) is continuous on D ∪
{(b, x0)}, then x(t) can be extended to (a, b] by defining x(b) = x0.

Note: Theorem 2.1 says the graph of x(t) depicted in Figure 2.1 cannot
oscillate in and out of the box for t near b, but must be squeezed into the
box; see Figure 2.1.
Proof. We first show that graph of the solution x(t) stays in the box for t
near b. Assume the hypotheses of the theorem are satisfied, but that there
are values of t arbitrarily close to b such that ‖x(t) − x0‖ > β. Choose m
large enough such that 0 < b − tm < α and (we can do this
since tn ↑ b and x(tn) → x0 as n → ∞), and such that . By
continuity, there exists tm < τ < b such that ‖x(τ) − x0‖ = β and ‖x(t) − x0‖ <
β, for tm ≤ t < τ.

Fig. 2.1 x(t) cannot oscillate in and out of the box for t near b.

Now

So,

This is a contradiction. Hence, it follows that ‖x(t) − x0‖ < β, for tm ≤ t ≤ b.


Therefore, the graph of x(t) stays in the box. Then, from

we have
Consequently, by the Cauchy Criterion limt→b−x(t) exists. (The Cauchy
Criterion is as follows: limt→b−x(t) exists if and only if for all ε > 0, there
exists a δ > 0 such that ‖x(t1) − x(t2)‖ < ε, for any t1, t2 ∈ (b − δ, b).) But
x(t) is continuous and hence limt→b− x(t) = x0.
Thus, we extend x(t) to (a, b] by defining x(b) := x0. Then x(t) is
continuous on (a, b]. Moreover, if f(t, x) can be defined at (b, x0) so as to be
continuous on D ∪ {(b, x0)}, then

Therefore, x(t) has a left-hand derivative at t = b which has the value f(b,
x0). This is a continuation of x(t) to (a, b]. ☐

Definition 2.2. Let x(t) be a solution of x′ = f(t, x) on an interval I. Then the


interval I is said to be a right maximal interval of existence for x(t) in case
there is no proper extension to the right. A left maximal interval is defined
similarly. I is called a maximal interval of existence for x(t) in case it is both
right and left maximal.

Definition 2.3. Let f(t, x) be continuous on D ⊆ ℝ × ℝn and let x(t) be a


solution of x′ = f(t, x) on an interval (a, b). Then x(t) is said to approach the boundary ∂D as t → b, written x(t) → ∂D as t → b, in case, for any compact set K ⊆
D, there exists tK ∈ (a, b) such that (t, x(t)) ∉ K, for tK < t < b.

Note: If x(t) is a solution on an infinite interval, say (a, +∞), then x(t) →
∂ D as t → ∞.
Theorem 2.2 (Continuation Theorem). Let f(t, x) be continuous on an
open set D ⊆ ℝ × ℝn and let x(t) be a solution of x′ = f (t, x) on an interval
I. Then x(t) can be continued to be defined on a maximal interval (α, ω) and
x(t) → ∂D, as t → α and as t → ω.
Proof. We will prove that x(t) can be continued to be defined on a right
maximal interval. Then that extension can be continued to the left to be
defined on a left maximal interval.
Let {Kn} be a sequence of nonnull open subsets of D such that the closures K̄n are compact, K̄n ⊆ Kn+1, for all n ≥ 1, and ∪_{n≥1} Kn = D.
Exercise 11. Describe how such a sequence of sets can be constructed.
Let b be the right end point of I. If b = +∞, then I is already right
maximal. Then as noted above, x(t) → ∂D, as t → ∞.
Second, assume that b < ∞ and I is open at b. Then there are two cases:

(1) for all m ≥ 1, there exists τm ∈ I such that , for τm < t < b,
or
(2) there exists a sequence tn ↑ b and a set Km such that , for
all n ≥ 1,
Case 1: We claim that if Case 1 holds, then I is already right maximal. So assume Case 1, but suppose I is not right maximal. Then, there exists a proper extension to the right. In particular, if τ ∈ I, then x(t) is a solution on [τ, b]. Now {(t, x(t)) | τ ≤ t ≤ b} (compact) ⊆ ∪Kn, which is an open covering. Hence there exists a finite subcovering, so by construction, there exists m0 such that {(t, x(t)) | τ ≤ t ≤ b} ⊆ Km0, which contradicts our assumption in Case 1. Thus, in this case, I is right maximal. We note that x(t) → ∂D as t → b is also satisfied.
Case 2: Assume that Case 2 holds. Then there exists a sequence tn ↑ b
and m ≥ 1 such that for any n ≥ 1. Since is compact, there exists a
subsequence , for some . By Theorem 2.1, there exists an extension of x(t)
to I ∪ {b}.
[From this point on, we will also resolve the case where the interval I is
closed at b.]
Now we have . By Corollary 1.3 of Theorem 1.4, there exists a δm > 0
such that x(t) can be extended to the interval [b, b + δm]. If , again by
Corollary 1.3 of Theorem 1.4, we can extend x(t) to [b + δm, b + 2δm].
Continue in this manner. But is compact, hence there exists a bound on the
number of times this “extending” can be done; i.e., there exists jm ≥ 1 such
that , but all such previously constructed coordinates belong to .
Let b1 = b + jmδm. It is true that (b1, x(b1)) ∈ D. Thus, since D = , there
exists m1 > m such that (b1, x(b1)) ∈ . By the same corollary, there exists a
δm1 > 0 such that x(t) can be extended to [b1, b1 + δm1]. Repeat the argument
above. So, we can say there exists an integer jm1 ≥ 1 such that , but .
As above, let b2 = b1 + jm1 δm1. Continuing the pattern, we obtain an
infinite sequence b < b1 < b2 < b3 < …, and an infinite sequence of integers,
m < m1 < m2 < …, such that x(t) is extended to the closed interval [b, bj] for
any j ≥ 1. Now

Let ω = supj ≥ 1{bj}. Then x(t) has been extended to an interval J = I ∪


[b, ω).
Now we establish that J is right maximal. If not, then x(t) has an
extension such that x(t) is a solution on [b, ω]. Consider {(t, x(t))| b ≤ t ≤ ω}
(compact) ⊆ ∪ Kn which is an open covering. Hence, there exists a finite
subcover and by the construction of {Kn}, there exists p such that {(t, x(t))|
b ≤ t ≤ ω} ⊆ Kp; a contradiction to (2.1) (i.e., this is a contradiction, since
there exists r − 1 > p, such that . Since br < ω, we have here that (br, x(br))
∈ Kp, yet from (2.1), .
Therefore, J = I ∪ [b, ω) is right maximal. By making a similar
argument, x(t) can be extended to the left, so that x(t) can be continued to a
maximal interval (α, ω).
Exercise 12. If I = (a, ω) and I is right maximal for a solution x(t) then x(t)
→ ∂D, as t → ω. Hint: Modify the proof of Cases 1 and 2.
This completes the proof of this theorem. ☐
If D = [a, b] × ℝn and f(t, x) is continuous on the slab D, then f(t, x) can be extended to the entire (n + 1)-space, ℝ × ℝn, by

f(t, x) := f(a, x) for t < a,  f(t, x) := f(b, x) for t > b,

and this extension is continuous.

Corollary 2.1 Let f(t, x) be continuous on [a, b] × ℝn. Then for any x0 ∈
ℝn, the IVP x′ = f(t, x), x(a)= x0 has a solution x(t) which can be continued
to be defined on a maximal interval which will be either

(1) [a, b], or


(2) [a, ω), where a < ω ≤ b and ‖x(t)‖ → ∞, as t → ω.
Proof. Since f can be extended continuously to ℝ × ℝn, by Theorem 2.2, x(t) can be continued to a maximal interval, which is either (1) [a, b], or a proper subinterval (2) [a, ω), ω ≤ b. For (2), let M > 0 and define the compact set K = [a, b] × {x | ‖x‖ ≤ M} ⊆ ℝ × ℝn ≡ D. By Theorem 2.2, x(t) → ∂D as t → ω. Hence, the graph of (t, x(t)) must leave the compact set K. Thus, there exists tK, a < tK < ω, such that (t, x(t)) ∉ K, for tK < t < ω.
Since ω ≤ b, it must be the case that ‖x(t)‖ ≥ M, for tK < t < ω. Since M > 0 was arbitrary, limt→ω− ‖x(t)‖ = +∞. As shown in Figure 2.2, x(t) leaves K at the top or bottom. ☐

Fig. 2.2 x(t) leaves K at the top.

Example 2.1. Let and consider

Fig. 2.3 x(t) → ∂D.

By separation of variables, the solution is . The interval (0, +∞) is left-


maximal and notice that, given any compact K ⊂ (0, +∞)×ℝ, there exists tK
> 0 such that , ∀0 < t < tK. Hence, x(t) → ∂ D as t → 0. Moreover, given
any compact K ⊂ (0, +∞) × ℝ, there exists tK > 0 for any tK < t < + ∞.
Hence, x(t) → ∂ D as t → ∞. See Figure 2.3.
Corollary 2.2. Let f(t, x) be continuous on the set Q = {(t, x) | |t − t0| ≤ a, ‖x − x0‖ ≤ b}. Let M = max_{(t, x)∈Q} ‖f(t, x)‖. Then all solutions of the IVP x′ = f(t, x), x(t0) = x0 extend to the interval [t0 − α, t0 + α], where α = min{a, b/M}.

Proof. First, we extend f continuously to ℝ × ℝn in the following way. Let


t, with |t − t0| ≤ a, be fixed and consult the Figure 2.4.
Fig. 2.4 The ball centered at x0 with radius b.

By vector addition, y = x0 + λ (x − x0), where λ > 0. Since y is on the


ball, ‖y − x0‖ = b. So, ‖x0 + λ(x − x0) − x0‖ = b, that is, λ = b/‖x − x0‖. Thus, for any x outside the ball of radius b, there exists y on the boundary of the ball, namely y = x0 + (b/‖x − x0‖)(x − x0), which pulls x down to the ball. Thus for x outside the ball, define

f(t, x) := f(t, x0 + (b/‖x − x0‖)(x − x0)).
In this way f is extended continuously to [t0 − a, t0 + a] × ℝn.


Now let t vary and consult the Figure 2.5.

Fig. 2.5 Extension of f(t, x) in t.

We extend f further by defining

f(t, x) := f(t0 − a, x) for t < t0 − a,  and  f(t, x) := f(t0 + a, x) for t > t0 + a.

Again f is continuous, and thus f is extended continuously to ℝ × ℝn.
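Read concretely, the two-step extension clamps x radially onto the ball of radius b about x0 and clamps t onto [t0 − a, t0 + a] before evaluating f. A Python sketch under these assumptions (the names are illustrative):

    import numpy as np

    def extend(f, t0, x0, a, b):
        # Continuous extension of f from Q = {|t - t0| <= a, ||x - x0|| <= b}
        # to all of R x R^n by radial projection in x and clamping in t.
        x0 = np.asarray(x0, dtype=float)
        def F(t, x):
            x = np.asarray(x, dtype=float)
            r = np.linalg.norm(x - x0)
            if r > b:                           # pull x back to the boundary sphere
                x = x0 + b * (x - x0) / r
            t = min(max(t, t0 - a), t0 + a)     # clamp t to [t0 - a, t0 + a]
            return f(t, x)
        return F

    # On Q the extension agrees with f, and everywhere it is bounded by M = max_Q ||f||.
    f = lambda t, x: np.array([t**2 + x[0]**2])
    F = extend(f, 0.0, [0.0], a=1.0, b=1.0)
    print(F(5.0, [10.0]))   # evaluates f at the clamped point (1.0, [1.0]) -> [2.0]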


Now let x(t) be a solution of the IVP x′ = f(t, x), x(t0) = x0, where f(t, x) is
the extension above. We will argue the assertion of the theorem to the right;
i.e., that x(t) can be extended as a solution to [t0, t0 + α].
Let [t0, ω) be a right maximal interval of existence for x(t). Since Q is
compact, there exists tQ such that (t, x(t)) ∉ Q for tQ < t < ω. By continuity,
there exists some t1 such that t0 ≤ t1 ≤ tQ, (t1, x(t1)) ∈ ∂Q, and (t, x(t)) ∈ Int
Q, for t0 ≤ t ≤ t1.

Fig. 2.6 The graph of (t, x(t)) hits ∂Q at t1.

Assume that t0 < t1 < t0 + α (i.e., assume the graph of (t, x(t)) hits ∂Q
before t reaches t0 + α) (Note: t1 − t0 < α). Then ‖x(t1) − x0‖ = b. Since Q is closed, (t, x(t)) ∈ Q, t0 ≤ t ≤ t1. See Figure 2.6. Therefore, ‖f(t, x(t))‖ ≤ M, for t0 ≤ t ≤ t1. Now from x(t1) = x0 + ∫_{t0}^{t1} f(s, x(s)) ds, we have

b = ‖x(t1) − x0‖ ≤ ∫_{t0}^{t1} ‖f(s, x(s))‖ ds ≤ M(t1 − t0) < Mα ≤ b,

which is a contradiction. Hence, our assumption that t0 < t1 < t0 + α is false,
which implies that t1 ≥ t0 + α, and consequently x(t) extends to [t0, t0 + α].
Since x(t) was an arbitrary solution of the IVP, it follows that every
solution extends to [t0, t0 + α]. Similar arguments can be made for
extending to [t0 − α, t0]. ☐

Corollary 2.3. Let f(t, x) be continuous and satisfy a local Lipschitz


condition wrt x on the open set D ⊆ ℝ × ℝn. Then solutions of IVP’s for
x′ = f (t, x) are globally unique in D.
Proof. Let (t0, x0) ∈ D, x(t) and y(t) be solutions of x′ = f (t, x), x(t0) = x0,
with respective maximal intervals of existence (α1, ω1) and (α2, ω2). We prove the solutions are unique to the right.
Let τ = sup{s ≥ t0 | x(t) ≡ y(t) on [t0, s]}. Then x(t) ≡ y(t) on [t0, τ).
Assume that t0 ≤ τ < min{ω1, ω2}. Then both x(t) and y(t) are continuous
on [t0, τ], and thus by this continuity, x(t) ≡ y(t) on [t0, τ].
By the Picard Theorem, there exists α > 0 such that solutions of the IVP

exist and are unique on [τ − α, τ + α]. Thus, y(t) ≡ x(t) on [τ − α, τ + α],


hence on [t0, τ + α]. But this contradicts the definition of τ. So, τ < min{ω1,
ω2} is false, and hence, τ = min {ω1, ω2} (τ cannot be larger, since (α1, ω1)
and (α2, ω2) were maximal intervals).
We claim that ω1 = ω2. If ω1 < ω2, so that τ = ω1, then x(t) ≡ y(t) on [t0,
ω1), and y(t) would constitute an extension of x(t) to [t0, ω2). But x(t)
cannot be extended past ω1. Thus ω1 ≥ ω2. Arguing similarly, ω2 ≥ ω1, so
that ω1 = ω2.
Therefore, x(t) ≡ y(t) on [t0, ω1) = [t0, ω2). ☐
Exercise 13. Make the argument in the corollary to the left.
Exercise 14. Let f(t, x) be a continuous real-valued function on Q = {(t, x)
∈ ℝ × ℝ | |t − t0| ≤ a, |x − x0| ≤ b}. Let M = max_{(t, x)∈Q} |f(t, x)| and α = min{a, b/M}. If x1(t),
x2(t), …, xn(t) are solutions of
image
on [t0 − α, t0 + α], prove that z(t) = max1≤j ≤ n xj (t) and y(t) = min1≤j ≤ n xj (t)
are solutions on [t0 − α, t0 + α].

Remark 2.1 It is also the case that φ(t) = sup{x(t) | x(t) is a solution of (2.2)} and ψ(t) = inf{x(t) | x(t) is a solution of (2.2)} are solutions of (2.2) on [t0 − α, t0 + α].

2.2 Kamke Convergence Theorem

Theorem 2.3 (Kamke Convergence Theorem). Let D ⊆ ℝ × ℝm be open


and let {fn(t, x)}, n ≥ 1, be a sequence of continuous m-vector valued functions on D such that limn→∞ fn(t, x) = f0(t, x) uniformly on each compact subset of D. For each n ≥ 1, let xn(t) be a noncontinuable solution of the IVP

x′ = fn(t, x), x(tn) = yn.  (2.3)n

Assume xn(t) is defined on (αn, ωn). Further assume limn→∞ (tn, yn) = (t0, y0) ∈ D. Then there exists a noncontinuable solution x0(t) of

x′ = f0(t, x), x(t0) = y0,  (2.3)0

with interval of existence (α0, ω0), and there exists a subsequence {xnk(t)} of the sequence {xn(t)} such that, for each compact [a, b] ⊂ (α0, ω0), xnk(t) → x0(t) uniformly on [a, b], in the sense that there is an N[a, b] ∈ ℕ such that, for nk ≥ N[a, b], the compact interval [a, b] ⊂ (αnk, ωnk) and xnk(t) → x0(t) uniformly on [a, b].
Exercise 15. Prove that, from the last statement of the theorem, and ; in
particular .
Proof of the theorem. We will prove that there is a solution x0(t) of (2.3)0 on
a right maximal interval of existence [t0, ω0) and a subsequence of the
sequence {xn(t)} such that for any τ with t0 < τ < ω0, there is a N[t0, τ] such
that , for nk ≥ N[t0, τ] and uniformly on [t0, τ]. See Figure 2.7. Having done
this, we return to the original statement of Theorem 2.3 and replace the
original sequence {xn(t)} by the subsequence then carry out an analogous
procedure on a subsequence to get a limit solution on a left maximal
interval.

Fig. 2.7 The solution sequence {xn(t)}.

Proceeding with the proof, let {Kn} be a sequence of nonnull open sets such that the closures K̄n are compact, K̄n ⊆ Kn+1, and ∪ Kn = D. See Figure 2.8.
For each j ≥ 1, let ρj = dist( , complement of Kj + 1) > 0. Let
. Then is compact and . Now as n→ ∞, the
sequence fn(t, x) → f0 (t, x) on uniformly, which implies that there exists Mj
> 0 such that ‖fn (t, x)‖ ≤ Mj on for all n ≥ 0. It follows that, for any point
and any n ≥ 0, the IVP

Fig. 2.8 K̄n ⊆ Kn+1, and ∪ Kn = D.

has all of its solutions defined on [s − δj, s + δj], where by Corollary 2.2.
(Note: The calculation of δj is as follows: Take and . So, and then set δj =
αj as has been done previously in Corollary 2.2.)
Also, all solutions are uniformly bounded on [s − δj, s + δj], since ‖x(t)‖
≤ , where .
Furthermore, all solutions satisfy a Lipschitz condition, since for a
typical solution and , where τ, t ∈ [s − δj, s + δj]. Hence,

holds for all t, τ ∈ [s − δj, s + δj]; that is, every solution to each IVP (2.3)j
satisfies the same Lipschitz condition.
Now assume (t0, y0) ∈ Km1. Then there is an N1 such that, for all n ≥ N1,
(tn, yn) ∈ Km1 and , since (tn, yn) → (t0, y0). See Figure 2.9.
Fig. 2.9 The points t0 and tn.

At this stage we have, for n ≥ N1, [t0, t0 + ε1] ⊆ [tn − δm1, tn + δm1] ⊂ (αn,
ωn), because all solutions of the IVP exist on [tn − δm1, tn + δm1].
Furthermore, is uniformly bounded and equicontinuous on [t0, t0 + ε1].
Therefore, by the Arzelà-Ascoli Theorem, there is a subsequence of integers {n1(k)} such that the subsequence {xn1(k)(t)} converges uniformly on [t0, t0 + ε1]. Call this limit x0(t); that is, xn1(k)(t) → x0(t) on [t0, t0 + ε1] uniformly.
We claim that x0(t) is a solution of x′ = f0(t, x), x(t0) = y0 on [t0, t0 + ε1].
Let t ∈ [t0, t0 + ε1]. Then

since .
Let n1(k) → ∞. Then , and so,

This verifies the claim and so x0(t) is a solution of x′ = f0(t, x), x(t0) = y0 on
[t0, t0 + ε1].
Now, if (t0 + ε1, x0 (t0 + ε1)) ∈ Km1 (i.e., the point on the graph of x0(t) at
the end point of [t0, t0 + ε1]), then we can repeat this process, since (i.e., the
process we have gone through depends only on the fact that (t0 + ε1, x0(t0 +
ε1)) ∈ Km1). Hence, repeating the above process, we obtain a second
subsequence {n2(k)} ⊆ {n1(k)} such that uniformly on [t0 + ε1, t0 + 2ε1], and
consequently uniformly on [t0, t0 + 2ε1].
Continuing in this manner, we must reach a first integer j1 ≥ 1 such that
(t0 + j1ε1, x0 (t0 + j1ε1)) ∉ Km1, and we will have also obtained
corresponding subsequences {ni(k)}, 1 ≤ i ≤ j1, such that {n(i + 1)(k)} ⊆
{ni(k)}. Define , and assume that , for m2 (recall D = ). Then we start over
and repeat our construction as above.
Summarizing, we obtain sequences, , where the first point where you go
outside Km2, etc, and a sequence of sets Km1 ⊆ Km2 ⊆ Km3 ⊆ …, with a
subsequence of integers {n1(k)}, {n2(k)}, {n3(k)},….
Let .
We claim that x0(t) is a solution of x′ = f0 (t, x), x(t0) = y0 on [t0,ω0).
To see that this is a solution, let t0 < τ < ω0. Then by the definition of ω0,
there exists . By our construction, there is a subsequence
which converges uniformly to x0(t) on , hence, x0(t) is a
solution on and therefore is a solution on [t0, τ]. But τ was arbitrarily
selected, thus x0(t) is a solution on [t0, ω0). Consider now the array of
sequences:

Take the diagonal subsequence {nk} ≡ {nk(k)}. With this subsequence,


converges uniformly to x0(t) on each compact subinterval of [t0, ω0).
To see this, let [a, b] ⊆ [t0, ω0). Then again, there exists so
that, .
By our construction, there is a subsequence which converges
uniformly to x0(t) on , hence on [a, b]. However, .
Thus, for K ≥ i0, it will follow that [t0, and for
uniformly on [a, b]. Thus, the convergence
properties stated in this theorem are satisfied on [t0, ω0).
Moreover, it follows from the fact that , for each p, and
that [t0, ω0) is right maximal for x0(t).
To complete the proof of the theorem, take the sequence and let it
play the role of the original sequence. Make the analogous argument to the
left of t0. The subsequence constructed this time will be a subsequence of
which is in turn a subsequence of {xn}. ☐
Remark 2.2 The Kamke Theorem is valid in the case where D is a slab of
the form D = [a, b] × ℝm. Suppose as in the theorem that fn → f0 on all
compact subsets of D, and that (tn, yn) → (t0, y0), etc. See Figure 2.10.

Fig. 2.10 The Kamke Theorem is valid on D=[a, b]×ℝm.

All conditions of the theorem are satisfied by making the extension:

Then if K is a compact set of the open set ℝ × ℝm, then

In particular, we can satisfy the uniform convergence of to on any


compact K ⊆ ℝ × ℝm. Thus, in the presence of the other hypotheses of the
theorem, the Kamke Theorem applies equally well for slab regions.

2.3 Continuous Dependence of Solutions on Initial Conditions

In this and the next section, we consider consequences of the Kamke Theorem, one of the main consequences being the continuous dependence of solutions of IVP’s on initial conditions and parameters.
Corollary 2.4 (to the Kamke Theorem). Assume the hypotheses of the
Kamke Theorem and in addition that the limit IVP (2.3)0 has a unique
noncontinuable solution x0(t) with maximal interval (α0, ω0).
Then, given any compact interval [a, b] ⊆ (α0, ω0), there exists N such
that

(i) [a, b] ⊆ (αn, ωn), for all n ≥ N, and


(ii) limn → ∞ xn(t) = x0(t), for n ≥ N, uniformly on [a, b].

Proof. Assume that (i) is false; that is, there is no N such that [a, b] ⊆ (αn,
ωn), for all n ≥ N. Then there exists a subsequence such that . In the Kamke
Theorem, replace the original sequence with this subsequence. Relative to
this subsequence, the hypotheses of the Kamke Theorem are satisfied. By
the Kamke Theorem, there exists a subsequence such that converges to a
solution of IVP (2.3)0 with the convergence being uniform on each compact
subinterval of the interval of existence for the solution of (2.3)0.
But x0(t) is the unique solution of (2.3)0, hence the subsequence converges to x0(t) uniformly on each compact subset of (α0, ω0), which is a contradiction. Thus (i) holds.
Assume now that (ii) is false. Then, there exists ε0 > 0 and a subsequence
such that at some points t ∈ [a, b] and for all nk. Now replace the
sequence in the Kamke Theorem with this subsequence. By the Kamke
Theorem, it follows that there exists a subsequence which converges
uniformly to a solution, to x0(t) by uniqueness, of (2.3)0 on each compact
subinterval of (α0, ω0); in particular, for i sufficiently large, we have , for all
t ∈ [a, b], which is a contradiction.
Therefore, limn→∞ xn(t) = x0(t) uniformly on [a, b] for n ≥ N. ☐
Remark 2.3 Above, it may be the case that the IVP’s

do not have unique solutions. Yet, if x0(t) is unique, then any sequence of
solutions of (2.3)n converge to x0(t) uniformly on compact subsets of (α0,
ω0).
For our next consequence of the Kamke Theorem, we see that if
solutions of IVP’s are unique, then whenever initial conditions are
perturbed slightly, the resulting solutions stay uniformly near to each other.
Theorem 2.4 (Continuous Dependence of Solutions on Initial
Conditions) Let f(t, x) be continuous on an open set D ⊆ ℝ× ℝn (also
holds on slabs with the appropriate extensions), and assume that IVP’s for
x′ = f(t, x) on D have unique solutions. Given any (t0, x0) ∈ D, let x(t; t0, x0)
denote the solution of

with maximal interval (α(t0, x0), ω (t0, x0)). Then for each ε > 0 and for each
compact [a, b] ⊆ (α(t0, x0), ω(t0, x0)), there exists a δ > 0 such that for all
(t1, x1) ∈ D, |t0 − t1 | < δ and ‖x1 − x0‖ < δ imply that [a, b] ⊆ (α(t1, x1),
ω(t1, x1)), the maximal interval of existence of the solution x(t; t1, x1) of

and ‖x(t; t1, x1) − x(t; t0, x0)‖ < ε on [a, b].

Proof. Assume that there is a (t0, x0) ∈ D, such that [a, b] ⊆ (α(t0, x0), ω(t0,
x0)), and an ε > 0 such that no such δ exists. See Figure 2.11. Choose a
sequence {δn} ↓ 0 such that one or the other of the conclusions fail for each
δn. Hence, for each n, there exists (tn, xn) ∈ D with |t0 − tn| < δn, ‖x0 − xn‖ <
δn. Then, (tn, xn) → (t0, x0), since δn ↓ 0, with one or the other of the
conclusions failing for the corresponding solution x(t; tn, xn) of the IVP

But this violates the previous corollary, since this sequence {x(t; tn, xn)}
satisfies the hypotheses of the Kamke Theorem, plus we have a unique
solution x(t; t0, x0).
Thus, our assumption is false and the conclusions must hold. ☐
Exercise 16. With the hypotheses and notations of Theorem 2.4, prove that
α(t, x) is upper semi-continuous and ω (t, x) is lower semi-continuous on D.

Fig. 2.11 A solution x(t; t0, x0)


Remark 2.4 Suppose the slab D = [a, b] × ℝn is given and that f(t, x) is
continuous on D and that solutions of IVP’s are unique on D. Assume that
x0(t) is a solution of x′ = f(t, x) which exists on [a, b]. Then any solution
within ε of x0(t) will exist also on [a, b] by the Kamke Theorem and
Theorem 2.4.

2.4 Continuity of Solutions wrt Parameters

Another type of continuous dependence satisfied by solutions, in the


presence of certain uniqueness conditions, is the continuity of solutions
wrt parameters. For example, we might consider the dependence of
solutions upon the parameter λ in equations of the form

or

Theorem 2.5 (Continuity of Solutions wrt Parameters). Let f(t, x, λ) be


continuous on D1 × D2, where D1 ⊆ ℝ × ℝn and D2 ⊆ ℝm are open.
Assume that for each (t0, x0, λ0) ∈ D ≡ D1 × D2, the solution of the IVP

x′ = f(t, x, λ0),  x(t0) = x0

is unique. Then, for each (t0, x0, λ0) ∈ D and for each [a, b] ⊂ (α(t0, x0, λ0),
ω(t0, x0, λ0)) and for each ε > 0, there exists δ > 0 such that (t0, x0, λ1) ∈ D
and ‖λ1 − λ0‖ < δ imply [a, b] ⊂ (α (t0, x0, λ1), ω(t0, x0, λ1)) and that ‖x(t; t0,
x0, λ1) − x(t; t0, x0, λ 0)‖ < ε on [a, b].
Note: We are denoting the solution of the IVP x′ = f (t, x, λ0), x(t0) = x0
by x(t; t0, x0, λ0).

Proof. Set

z = (x1, …, xn, λ1, …, λm)T,

i.e., z is an (n + m)-vector with the first n components those of x and the last m components those of λ.
Set

h(t, z) = (f1(t, x, λ), …, fn(t, x, λ), 0, …, 0)T,

and then consider the IVP

z′ = h(t, z),  z(t0) = z0, where z0 has components x01, …, x0n, λ01, …, λ0m,

which is now an IVP without a parameter. We note that zi′ = hi(t, z) = 0, for n + 1 ≤ i ≤ n + m, thus in the solution z(t), the components zi(t) are constant, for n + 1 ≤ i ≤ n + m. From the initial condition, we must have zn+i(t0) = λ0i; thus zn+i(t) ≡ λ0i, 1 ≤ i ≤ m.
Hence, the λ’s in f1(t, x, λ), f2(t, x, λ), …, are these λ0i’s, i.e., zi(t) = xi(t), 1 ≤ i ≤ n, are the components of the solution of x′ = f(t, x, λ0), x(t0) = x0. Therefore, the problem z′ = h(t, z), z(t0) = z0 is equivalent to the original parameter-dependent IVP with λ = λ0.
Apply Theorem 2.4 to z′ = h(t, z) (note that the solutions z(t) are unique since solutions to x′ = f(t, x, λ0), x(t0) = x0 are unique), and the conclusions follow. ☐
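The device used in this proof, appending λ to the state vector with λ′ = 0, is also a practical way to examine parameter dependence numerically. A minimal SciPy sketch (the scalar equation x′ = λx is an illustrative assumption, not taken from the text):

    import numpy as np
    from scipy.integrate import solve_ivp

    def augmented_rhs(t, z):
        # z = (x, lam); x' = f(t, x, lam) = lam * x and lam' = 0.
        x, lam = z
        return [lam * x, 0.0]

    for lam0 in [0.9, 1.0, 1.1]:
        sol = solve_ivp(augmented_rhs, (0.0, 1.0), [1.0, lam0], rtol=1e-8)
        print(lam0, sol.y[0, -1])   # x(1; lam) = exp(lam), varying continuously with lam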
Remark 2.5 Suppose that f(t, x) is continuous on a slab region of the form
[t0, ∞) × ℝn. Further assume that the solution x(t; t0, x0) of the IVP x′ = f(t, x), x(t0) = x0 is unique and exists on [t0, ∞). Then by our results concerning continuity of
solutions wrt the initial conditions, given [t0, t1] and given ε > 0, there exists
δ > 0 such that ‖x1 − x0‖ < δ implies all solutions of

extend to [t0, t1] and satisfy ‖x(t; t0, x1) − x(t; t0, x0)‖ < ε on [t0, t1].

Definition 2.4. The solution x(t; t0, x0) is said to be stable in case, for each ε
> 0, there exists δ > 0 such that, for each x1 ∈ ℝn with ‖x1 − x0‖ < δ, the
solution x(t; t0, x1) exists on [t0,∞) and satisfies ‖x(t; t0, x1) − x(t; t0, x0)‖ < ε
on [t0,∞); i.e., a strong continuity property wrt x0 is satisfied with t fixed at
t0.

Example 2.2 (1) Consider the scalar equation x′ = x2 on [0,∞) × ℝ. If f(t, x)


= x2, then and hence solutions of IVP’s are unique. Consider the
solution x(t; 0, 0) of the IVP

By uniqueness of solutions of IVP’s, x(t; 0, 0) ≡ 0. We claim that x(t; 0, 0) is


not stable.
To see this, let δ > 0 be given and consider the solution x(t; 0, δ) of x′ =
x2, x(0) = δ. By separation of variables, x(t; 0, δ) = δ/(1 − δt).
We see that, for any δ > 0, x(t; 0, δ) does not exist on all of [0, ∞); in
fact, x(t; 0, δ) → +∞ as t → (1/δ)−. See Figure 2.12.
However, we note that there is continuous dependence on initial
conditions, provided we restrict ourselves to compact subintervals [0, b]. Just
select δ sufficiently small that 1/δ > b, and the continuity remarks above
will be satisfied.
(2) Consider now the scalar equation x′ = − x. The solution x(t; 0, 0) ≡ 0
is stable. To see this, solve the IVP: x′ = − x, x(0) = δ and obtain x(t; 0, δ) =
δe−t which exists on [0, ∞) and |x(t; 0, δ) − x(t; 0, 0)| ≤ |δe−t| ≤ δ. Choosing δ
< ε, we see that the definition concerning stability of x(t; 0, 0) ≡ 0 is
satisfied.
(3) Consider
Fig. 2.12 The solution x(t; 0, δ) of x′ = x2, x(0) = δ.

The equation satisfies a Lipschitz condition on slabs of the form [a, b] × ℝ2.
Let’s look at this equation on say [0, 2]. We can write the corresponding
integral equations

Suppose y(t) is a solution of the same differential equation with |y(0) − x(0)|
+ |y′(0) − x′(0)| < δ. Correspondingly, there are similar integral equations in
terms of y. Now, choose y such that y(0) ≠ c0, y′(0) ≠ c1. Subtract
corresponding integral equation expressions and thus obtain

Hence,

Set h(t) = |y(t) − x(t)| + |y′(t) − x′(t)|, and then by (2.4) and (2.5), we have
A special form of the Gronwall inequality states that h(t) ≤ δ + K ∫₀ᵗ h(s) ds on [0, 2]
implies h(t) ≤ δeKt, for all t ∈ [0, 2].


From the above form of the Gronwall inequality, h(t) ≤ δeKt, for
all t ∈ [0, 2]. Hence,

Now, by our Lipschitz condition, the solution of the original IVP is


unique on [0, 2], and we see that the continuity wrt initial conditions in
Remark 2.5 made above is satisfied again; at least as far as y(t) and [0, 2]
are concerned, this is true, by taking δ small enough that the bound above is less than ε on [0, 2]. A simple modification can be
made for any compact interval [0, a].
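To make the continuous dependence in this chapter concrete, here is a small numerical sketch (the equation x″ = −sin x below is only a hypothetical stand-in satisfying a Lipschitz condition on slabs [a, b] × ℝ2, and the tolerances are arbitrary): the maximal deviation of a perturbed solution from the base solution on the compact interval [0, 2] shrinks in proportion to the size of the initial perturbation, as the Gronwall estimate predicts.

import numpy as np
from scipy.integrate import solve_ivp

def f(t, u):
    x, xp = u
    return [xp, -np.sin(x)]          # right-hand side, Lipschitz on every slab [a, b] x R^2

t_eval = np.linspace(0.0, 2.0, 201)
base = solve_ivp(f, (0.0, 2.0), [1.0, 0.0], t_eval=t_eval, rtol=1e-10, atol=1e-12)

for delta in (1e-1, 1e-2, 1e-3):
    pert = solve_ivp(f, (0.0, 2.0), [1.0 + delta, 0.0], t_eval=t_eval,
                     rtol=1e-10, atol=1e-12)
    dev = np.max(np.abs(pert.y - base.y))
    print(f"delta = {delta:.0e}:  max deviation on [0, 2] = {dev:.2e}")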
Chapter 3

Smooth Dependence on
Initial Conditions and Smooth
Dependence
on Parameters

3.1 Differentiation of Solutions wrt Initial Conditions

In the preceding chapter, we discussed the continuous dependence of


solutions of IVP’s wrt initial conditions and parameters. We now consider
conditions under which solutions of IVP’s can be differentiated wrt initial
conditions and parameters, and we also examine properties of the resulting
functions.
Theorem 3.1 (Differentiation wrt Initial Conditions). Let f(t, x) be
continuous and have continuous first partial derivatives wrt the components
of x on an open D ⊆ ℝ× ℝn. Let x(t; t0, x0) = x(t) be the unique solution of
the IVP

and have maximal interval of existence (α, ω). Choose [t1, t2] ⊂ (α,ω) and
t0 ∊ [t1, t2 ]. Let A(t) be the Jacobian matrix, A(t) = fx(t, x(t)) (i.e., = fx (t,
x(t; t0, x0))). Then x (t; t0, x0) has continuous partial derivatives wrt the
components x0j of x0, for 1 ≤ j ≤ n, and wrt t0 on [t1, t2], hence on (α,ω).
Furthermore, inline is the solution of the IVP
where ej = (δ1j, δ2j,…,δnj)T with

and inline is the solution of the IVP

Proof. We consider first inline. Let 1 ≤ j ≤ n be fixed and let xm (t) = x(t;
t0, x0 + δmej), where {δm} → 0 (i.e., xm (t0) = x0 + δmej). Let [t1, t2] be a
fixed compact subinterval of (α, ω). Let η > 0 be such that K ≡ {(s, z) | t1 ≤
t ≤ t2, |s − t| ≤ η, and ‖z − x(t; t0, x0)‖ ≤ η} ⊆ D.
By the Kamke Theorem, there exists N1 such that for all m ≥ N1, [t1, t2]
⊂ (αm, ωm), the maximal interval of existence of xm(t), and the graph {(t,
xm(t)) | t1 ≤ t ≤ t2} ⊆ K, i.e., ‖xm(t) − x(t)‖ ≤ η, for t1 ≤ t ≤ t2, and inline
xm(t) = x(t) uniformly on [t1, t2].
Recall here the form of the Mean Value Theorem for vector valued
functions, which we have seen earlier in Chapter 1: f(t, y) – f(t, x) =
inline where fx denotes the Jacobian matrix of f.
So, for all m ≥ N1,

Since xm (t) → x(t) uniformly on [t1, t2] and from the uniform continuity of
fx on K, it follows that as m → ∞,

Now, let

Then, by (3.1),

Now, by construction,
Moreover, inline is continuous on [t1, t2] × ℝn, for all m ≥ N1, and
converges uniformly to fx(t, x(t))y on each compact subset of [t1, t2] × ℝn.
By the Kamke Theorem, it follows that inline uniformly on the compact
subinterval [t1, t2], where z(t) is the solution of the IVP

Now let’s look at z(t) in detail. We see that zm(t) = inline is the type
of difference quotient we would consider in looking for a partial derivative
wrt the x0j component. Above, we showed limm→∞zm(t) existed (or limδm→ 0
of the difference quotient existed). Since this applies to any sequence {δm}
with δm → 0 as m → ∞, we can make the stronger statement that given [t1,
t2] ⊂ (α, ω) with t0 ∈ [t1, t2] and for each ε > 0, there exists δ > 0 such that,
for all h ∊ ℝ with |h| < δ, [t1, t2] is contained in the maximal interval of
existence of x(t; t0, x0 + hej) and

that is, inline But 1 ≤ j ≤ n was arbitrary, thus partial derivatives wrt the
components of the initial vector x0 exist.
For differentiability wrt t0, let δm → 0 and let xm(t) denote the solution
x(t; t0 + δm, x0). As before, we have

Now define inline. In this case,

Note that x(t0) = x0 and xm(t0 + δm) = x0. In integral form,

Hence,

Consequently,
and as above, zm(t0) → − f(t0, x0). Then proceeding as in the first part of the
proof with uniform convergence, etc., and with the sequence of initial
conditions: (t0, zm(t0)) → (t0, − f(t0, x0)), we obtain inline is the solution of
the IVP

Example 3.1. Consider the autonomous (independent of t) system

Let D = ℝ× ℝ2. By the uniqueness of solutions of IVP’s, solutions x1, x2 are


continuous and differentiable functions of IC’s. Here x1(t) ≡ 0 and x2(t) ≡ 0,
i.e., inline. First compute

By Theorem 3.1, inline is the solution to the IVP

i.e.,

Hence,

It follows that z1(t) = cosh (t − t0), z2(t) = sinh (t − t0). Hence,

Now we know that

Hence,

where the initial condition is x1(t0) = h, x2(t0) = 0.
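A numerical cross-check of Theorem 3.1 in the spirit of this example (the system x1′ = sinh x2, x2′ = sinh x1 used below is only an illustrative choice; it has the zero solution and the same Jacobian, with 1’s off the diagonal, along that solution): the sensitivity ∂x/∂x01 obtained from the variational equation should agree with a finite-difference approximation of the flow and with (cosh(t − t0), sinh(t − t0)).

import numpy as np
from scipy.integrate import solve_ivp

def f(t, x):                          # stand-in system: x1' = sinh(x2), x2' = sinh(x1)
    return [np.sinh(x[1]), np.sinh(x[0])]

def var_eq(t, z):                     # variational equation along x(t) = 0:  z' = [[0, 1], [1, 0]] z
    return [z[1], z[0]]

T = 1.5
z = solve_ivp(var_eq, (0.0, T), [1.0, 0.0], rtol=1e-10, atol=1e-12)   # z(0) = e1
h = 1e-6
xp = solve_ivp(f, (0.0, T), [h, 0.0], rtol=1e-10, atol=1e-12)
x0 = solve_ivp(f, (0.0, T), [0.0, 0.0], rtol=1e-10, atol=1e-12)
fd = (xp.y[:, -1] - x0.y[:, -1]) / h  # finite-difference approximation of d x(T)/d x01

print("variational equation:", z.y[:, -1])     # approximately (cosh T, sinh T)
print("finite difference   :", fd)
print("cosh T, sinh T      :", np.cosh(T), np.sinh(T))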


3.2 Differentiation of Solutions wrt Parameters

Suppose now that f(t, x, λ) is continuous and has continuous first partials
wrt the components of x and λ on an open set D ⊆ ℝ× ℝn× ℝm. Consider
the IVP

and let x(t; t0, x0, λ0) denote the solution. We wish to discuss inline, for 1
≤ j ≤ m.
Change the IVP to the nonparametric situation by inline, so that

By Theorem 3.1, inline is the solution of the IVP y′ = hz (t, z(t; t0, z0))·y
satisfying

and the Jacobian

If inline is the solution of IVP

then the last m components satisfy the IVP’s,

whose solutions are inline, and yn + j(t) ≡ 1. Thus, we have that inline is
the solution of the IVP

where

Theorem 3.2 (Differentiation wrt Parameters). Suppose f(t, x, λ) is


continuous and has continuous first partial derivatives wrt the components
of x and λ on an open set D ⊆ ℝ × ℝn × ℝm. Let (t0, x0, λ0) ∈ D and let x(t;
t0, x0, λ0) be the unique solution of

Then x(t; t0, x0, λ0) has partial derivatives wrt the components of λ0 on (α(t0,
x0, λ0), ω(t0, x0, λ0)) and inline is the solution of (3.2).
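For reference, a sketch of the standard form that the IVP (3.2) takes (the notation fx for the Jacobian wrt x and fλj for the partial derivative wrt the jth component of λ is assumed here):

\[
y' = f_x\bigl(t, x(t; t_0, x_0, \lambda_0), \lambda_0\bigr)\, y
   + f_{\lambda_j}\bigl(t, x(t; t_0, x_0, \lambda_0), \lambda_0\bigr),
\qquad y(t_0) = 0,
\]

with y(t) = ∂x/∂λ0j(t; t0, x0, λ0); the initial value is 0 because x(t0) = x0 does not depend on λ.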

Remark 3.1. Consider the case of IVP’s for a linear system

where

Hence, the Jacobian is

i.e., fx(t, x(t; t0, x0)) = A(t), for all solutions x(t) of the original system of
differential equations.
Definition 3.1. Given a solution x(t; t0, x0) of x′ = f(t, x), x(t0) = x0, the
differential equation y′ = fx(t, x(t; t0, x0))y is called the first variational
equation of x′ = f(t, x) wrt the solution x(t; t0, x0).

Remark 3.2. Consider the IVP for the nth order scalar equation,

where f (t, y1, y2,…yn) : I × ℝn → ℝ is continuous.


We transform this into the first order system

that is,

where
For this first order system, we have

It follows from Theorem 3.1 that inline is the solution of the IVP

which is, in component-wise form,

Equivalently, we conclude that z1(t), the first component of the solution


of the IVP for this system, is the solution of the nth order IVP (with z1 = z)

But z1(t) is the first component of inline. So, inline, and in particular
inline is the solution of the nth order linear differential equation in (3.3)
and satisfies the IC’s there.
Note: In (3.3), the coefficient of z(j) is just a continuous function of t.
Example 3.2. This example illustrates Remark 3.2. Let inline and
consider

Now inline are continuous and hence the IVP has a unique solution. By
inspection, our unique solution of the given nonlinear IVP is x(t; 0, c) = x(t;
0, 1, 0) ≡ 1. We wish to calculate .
From above, inline is the solution of the IVP

i.e., inline is the solution of

Hence

Now calculate inline. In this case, we seek the solution of


So, inline.
Note: Above we use inline rather than inline to avoid
confusion. We could have just recalled that

Exercise 17. A. Let λ be an m-vector and let x(t; t0, c, λ) be the solution of
the IVP

(So c = (c1, c2,…, cn)T.) Determine the IVP which has inline as a
solution.
B. Apply problem A to each of the following:
(a) x′′ + λx = 0, x(0) = 0, x′(0) = 1, λ0 = 4.
(b) (1−t2)x′′ − 2tx′ + λ(λ+1)x = 0, t0 = 0, λ = n; solution is Pn(t).
(c) t2x′′ + tx′+ (t2 − λ2)x = 0, t0 = 1, λ = 0; solution is J0(t).
C. In B(a), calculate inline by solving the IVP, then differentiating wrt λ.
D. Given the scalar IVP

compute inline and inline, by applying Theorem 3.1, and also by


solving the IVP, then differentiating the solution.
E. Given the scalar equation x′ = λ + cos x, λ ∊ ℝ, compute inline by
applying Theorem 3.2.

3.3 Maximal Solutions and Minimal Solutions

For a final application of the Kamke Theorem, we have the following.


Definition 3.2. Let φ (t, x) be continuous on an open set D ⊆ ℝ × ℝ and let
(t0, x0) ∊ D. A solution x(t) of
on a maximal interval of existence (α0, ω0) is said to be a maximal solution
in case, for any other solution y(t) on a maximal interval (α1, ω1), we have
x(t) ≥ y(t) on (α0, ω0) ∩ (α1, ω1). A minimal solution is similarly defined.

Note: φ (t, x) is real-valued; i.e., φ (t, x) : D → ℝ.


Exercise 18. Show that a maximal solution of

is unique.
Theorem 3.3. Let φ (t, x) be continuous on an open set D ⊆ ℝ × ℝ and let
(t0, x0) ∊ D. Then the IVP
x′ = φ(t, x), x(t0) = x0 (3.4)
has a maximal solution xM(t) and a minimal solution xm(t).

Proof. For each n ≥ 1, let un(t) denote a solution of the IVP

defined on a maximal interval of existence (αn, ωn). Then by the Kamke


Theorem, there is a solution x∗(t) of (3.4) on a maximal interval of
existence (α∗,ω∗) and a subsequence inline which converges uniformly
to x∗(t) on each compact subinterval of (α∗,ω∗).
Assume now that y(t) is another solution of (3.4) on a maximal interval
inline such that, for some inline and t1 > t0, y(t1) > x*(t1). Now
unk(t) → x*(t) on [t0, t1] as k → ∞ and so there exists {unk0} such that unk0 (t1)
< y(t1). Since unk0 (t0) = x0 = y(t0), it follows that there is

Fig. 3.1 The solutions x∗(t), y(t), and unk0 (t).

some t2 with t0 ≤ t2 ≤ t1 such that y(t2) = unk0 (t2) and y(t) > unk0 (t) on (t2, t1].
See Figure 3.1. Thus
Therefore,

which is a contradiction. Thus, no such y(t) exists and it follows that x∗(t) is
a maximal solution on [t0, ω∗). A similar argument shows that x∗(t) is a
minimal solution on (α∗, t0].
Now, for each n ≥ 1, let wn(t) be a solution of

on a maximal interval (αn,ωn). Again, by the Kamke Theorem, there exists a


solution x∗∗(t) of (3.4), on a maximal interval (α∗∗,ω∗∗) and a Smooth
Dependence on Initial Conditions and Parameters subsequence inline
which converges uniformly to x∗∗(t) on each compact subinterval of
(α∗∗,ω∗∗). Arguing as above, it can be shown that x∗∗ is a minimal
solution of (3.4) on [t0, ω∗∗) and a maximal solution on (α∗∗, t0]. Hence

with maximal interval (α**, ω*), and

with maximal interval (α*, ω**).


(Note: xM and xm are unique.) ☐
Chapter 4

Some Comparison Theorems and


Differential Inequalities

4.1 Comparison Theorems and Differential Inequalities

For a function y(t), defined in some neighborhood of the point t0, the Dini
derivatives at the point t are defined as follows:

For the next few results, we will be concerned with differential


inequalities. In establishing these results, we will make use of maximal and
minimal solutions.
Theorem 4.1. Let φ (t, x) be continuous on an open set D ⊆ ℝ × ℝ and (t0,
x0) ∈ D. Let xM(t) be the maximal solution of
on a maximal interval (α, ω). Let v(t) be continuous and real-valued on [t0,
t0 + a] and satisfy the following:
(1) (t, v(t)) ∈ D, for t ∈ [t0, t0 + a],
(2) D+v(t) ≤ φ(t, v(t)), for t ∈ [t0, t0 + a), and
(3) v(t0) ≤ x0.
Then, v(t) ≤ xM (t) on [t0, t0 + a] ∩ (α, ω).

Proof. Assume the conclusion to be false. Then, there exists t1 > t0 with t1
∈ [t0, t0 + a]∩ (α, ω) such that v(t1) > xM(t1). Now v(t0) ≤ x0 = xM (t0) and
so, as in Theorem 3.3, there exists n ≥ 1 and a solution un(t) of

on [t0, t1] such that un(t1) < v(t1).


As in Theorem 3.3, there exists a t2 with t0 ≤ t2 < t1 such that un(t2) =
v(t2) and un(t) < v(t) on (t2, t1]. Then

Hence, D+[v(t) – un(t)]|t = t2 ≥ 0. Now, un is differentiable, consequently

which is a contradiction. Therefore, our assumption is false, and it follows


that v(t) ≤ xM(t) on [t0, t0 + a] ∩ (α, ω). ☐
Exercise 19. In each of the following, modify the hypotheses of Theorem
4.1 concerning v(t) in the indicated way, and prove (some of) the
corresponding results.
(1) D+v(t) ≥ φ(t,v(t)) on [t0, t0 + a], v(t0) ≥ x0 ⇒ v(t) ≥ xm(t).
(2) D−v(t) ≥ φ(t, v(t)) on (t0 – a, t0], v(t0) ≤ x0 ⇒ v(t) ≤ xM(t).
(3) D−v(t) ≤ φ(t,v(t)) on (t0 – a, t0], v(t0) ≥ x0 ⇒ v(t) ≥ xm(t).

Corollary 4.1 (To Theorem 4.1). If v(t) is continuous on [a, b] and D+v(t)
≤ 0 on [a, b], then v(t) is nonincreasing on [a, b].
Proof. In Theorem 4.1, take φ(t, x) ≡ 0. Let a ≤ t0 < t1 ≤ b and consider the
IVP

It follows that xM(t) = v(t0), for all t ∈ [a, b] and that v(t) ≤ v(t0), for all t0 ≤
t ≤ b. In particular, v(t1) ≤ v(t0).

Corollary 4.2 If ψ (t, x) and φ (t, x) are continuous real-valued on an open


set D ⊆ ℝ × ℝ such that ψ (t, x) ≤ φ(t,x) on D, if xM(t) is the maximal
solution of

where (t0, x0) ∈ D, and if v(t) is a solution of x′ = ψ (t, x) with v(t0) ≤ x0,
then v(t) ≤ xM(t) on a common right interval of existence.

Proof. We have that v′(t) = ψ(t, v(t)) ≤ φ (t, v(t)) for all t in the maximal
interval for v(t). Moreover, v(t0) ≤ x0. It follows from Theorem 4.1 that v(t)
≤ xM(t) on a common right interval of existence.

Exercise 20. Using Corollary 4.2, prove that the solution of the IVP

has a vertical asymptote somewhere between and .


Hint: In conjunction with Corollary 4.2, use the comparison equations x′
= t2 + 2tx + x2 and x′ = a2 + x2, where a > 0.
The solution of the IVP resembles Figure 4.1.
If v(t) is the solution, observe that v′(0) = 02 + 02 = 0, but v′(t)= t2 + v2
>0, for t > 0 and hence v(t) > 0, for t > 0.
A possible way to use the hints is as follows: First let v(t) be a solution
of x′ = t2 + x2. Then v′ (t) = t2 + v2(t) ≤ t2 + 2tv(t) + v2(t), for t ≥ 0. Find the
solution of x′ = t2 + 2tx + x2, x(0) = 0, apply Corollary 4.2 and show that v(t)
has an asymptote greater than .
For the second inequality consider x′ = a2 + x2, a > 0. As above let v(t)
be the solution of x′ = t2 + x2, x(0) = 0 (recall then v(a) >0), and notice that
v′(t) = t2 + v2(t) ≥ a2 + v2, for t ≥ a. So find the solution of x′ = a2 + x2
with x(a) = 0. If a is picked correctly, an application of Corollary 4.2 will
show that v(t) has an asymptote less than .
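One way to carry out the hint (a sketch; the constant a below is only one admissible choice):

\[
\begin{aligned}
\text{(i)}\;& \text{For } t, x \ge 0,\ t^2 + x^2 \le t^2 + 2tx + x^2.\ \text{With } u = x + t,\ \text{the IVP } x' = t^2 + 2tx + x^2,\ x(0) = 0\\
& \text{becomes } u' = 1 + u^2,\ u(0) = 0,\ \text{so } x_M(t) = \tan t - t,\ \text{with vertical asymptote at } t = \pi/2.\\
& \text{Corollary 4.2 then gives } v(t) \le \tan t - t,\ \text{so } v \text{ cannot blow up before } \pi/2.\\[4pt]
\text{(ii)}\;& \text{For } t \ge a,\ a^2 + x^2 \le t^2 + x^2.\ \text{The solution of } x' = a^2 + x^2,\ x(a) = 0\ \text{is } w(t) = a\tan\bigl(a(t - a)\bigr),\\
& \text{with asymptote at } t = a + \pi/(2a).\ \text{Since } w(a) = 0 \le v(a),\ \text{Corollary 4.2 gives } w \le v,\ \text{so } v \text{ must}\\
& \text{blow up no later than } a + \pi/(2a);\ \text{for instance } a = \sqrt{\pi/2}\ \text{gives the bound } \sqrt{2\pi} \approx 2.51.
\end{aligned}
\]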

Fig. 4.1 The solution of the IVP resembles v(t).

Remark 4.1. If x(t) : [a, b] → ℝn is differentiable on [a, b), then D+ ‖x(t)‖


≤ ‖x′(t)‖ on [a, b).
Proof. Let t ∈ [a, b) and h > 0 such that t + h ∈ [a, b). Then

which yields

for h > 0. The above inequality is true for all h > 0 such that t + h ∈ [a, b).
Hence,
implying, by continuity,

For notational purposes, let Dr denote the right-hand derivative of a


function. If Dr exists, then Dr = D⁺ = D₊ (the upper and lower right Dini derivatives coincide).

Lemma 4.1. If x(t) is a differentiable vector-valued function on [a, b], then


the right-hand derivative, Dr‖x(t)‖, exists on [a, b) and satisfies Dr‖x(t)‖ ≤
‖x′(t)‖.

Proof. Let x, u ∈ ℝn be fixed and let 0 < θ ≤ 1 and h > 0. Then

Hence

and so

It follows that decreases, as h → 0+; i.e., the quotient is


a nondecreasing function in k ∈ (0, h]. To see this, let 0 < k < h, so that k =
θh, where 0 < θ < 1. Then

thus the claim is true. (Furthermore, for h → 0−, a reversal of the


inequalities occurs and for h < k < 0.)
Now

Then,
and so

Here, the nondecreasing function is bounded below by –‖u‖ (and above


by ‖u‖). So, exists. In particular, if t ∈ [a, b) is fixed
then exists.
Note here that

Thus, for h > 0 such that t + h ∈ [a, b), we have

Now as h → 0, the right-hand side approaches 0, thus the left-hand side


approaches 0. So, if h ↓ 0, since exists from above, we
have exists and

Exercise 21. Work out a corresponding result in that D ℓ ‖x(t)‖ exists on (a,
b] and Dℓ‖x(t)‖ ≤ ‖ x′(t)‖ on (a, b]. Here, you will take into account that h <
0.
Corollary 4.3. Assume that φ (t, x) is continuous and nonnegative valued
on an open D ⊆ ℝ × ℝ, and let xM(t) be the maximal solution of x′ = φ (t, x)
satisfying x(t0) = x0 ≥ 0. Let y(t) be a C(1) n-vector valued function on [t0, t0
+ a] such that ‖y′(t)‖ ≤ φ(t, ‖y(t)‖) on [t0, t0 + a] and ‖y(t0)‖ ≤ x0. Then
‖y(t)‖ ≤ xM(t) on any common right interval of existence of y(t) and xM(t).
Further if zm(t) is the minimal solution of x′ = – φ(t, x) satisfying x(t0) = x1,
and again, if y(t) is a C(1) n-vector valued function on [t0, t0 + a] with ‖y′(t)‖ ≤
φ(t, ‖y(t)‖) on [t0, t0 + a] and ‖y(t0)‖ ≥ x1, then ‖y(t)‖ ≥ zm(t) on any
common right interval of existence of y(t) and zm(t).

Note: In this case, x′ = φ(t, x) is called a scalar comparison equation.


Proof. By Lemma 4.1, Dr ‖y(t)‖ ≤ ‖y′(t)‖, and so by the hypotheses,
Dr‖y(t)‖ ≤ φ(t, ‖y(t)‖). Now since ‖y(t)‖ is a real-valued function and
‖y(t0)‖ ≤ x0, if we apply Theorem 4.1 to ‖y(t)‖, we have ‖y(t)‖ ≤ xM(t).
For the other part, consider ‖y(t + h)‖ – ‖y(t)‖ ≥ – ‖y(t + h) –y(t)‖. For h
> 0,

Hence, Dr ‖y(t)‖ ≥ – ‖y′(t)‖, and so by the hypothesis, Dr‖ y(t)‖ ≥ –


φ(t,‖y(t)‖). Applying one of the corresponding 3 parts of Theorem 4.1 (in
Exercise 19), and using ‖y (t)‖ as the real-valued function, we have ‖y(t)‖ ≥
zm(t). ☐

Corollary 4.4 Let φ (t, x) be continuous on [t0, t0+a] × ℝ and be


nondecreasing in x, for each fixed t. Let xM(t) be the maximal solution of the
IVP

and let v(t) satisfy v(t) ≤ x1 + ∫ₜ₀ᵗ φ(s, v(s)) ds on [t0, t0 + a], where x1 ≤ x0. Then


v(t) ≤ xM(t) on the intersection of the right maximal interval for xM(t) with
[t0, t0 + a].

Proof. Set z(t) = x1 + ∫ₜ₀ᵗ φ(s, v(s)) ds on [t0, t0 + a]. Then v(t) ≤ z(t) on [t0, t0


+ a]. Also z′(t) = φ(t, v(t)). Since φ is nondecreasing, then z′(t) = φ(t, v(t)) ≤
φ(t, z(t)). Moreover z(t0) = x1 ≤ x0, and hence by Theorem 4.1, z(t) ≤ xM(t).
Thus v(t) ≤ xM (t). ☐

Theorem 4.2. Let f(t, y) be continuous on a slab [t0, t0 + a] × ℝn and


assume ‖f (t, y)‖ ≤ φ (t, ‖y‖), where φ (t, x) is continuous on [t0, t0 + a] × [0,
+∞) with φ (t, x) ≥ 0. Let y(t) be a solution of the IVP

Then, if xM (t) is the maximal solution of the IVP

and if xM(t) exists on [t0, t0 + a], it follows that y(t) extends to [t0, t0 + a].

Proof. Assume that y(t) is a solution of (4.1) which does not extend to [t0, t0
+ a]. Then the maximal interval of existence for y(t) is of the form [t0, t0 +
η) with 0 < η ≤ a and ‖y(t)‖ → + ∞ as t → (t0 + η)−. However, on [t0, t0 +
η),

By Theorem 4.1, ‖y(t)‖ ≤ xM(t) on [t0, t0 + η). But xM(t) exists on [t0, t0 + a];
thus, there exists B > 0 such that xM(t) ≤ B on [t0, t0 + a], which
contradicts ‖y(t)‖ → +∞ as t → (t0 + η)−.
Therefore if y is a solution of (4.1), then y(t) extends to [t0, t0 + a]. ☐
Remark 4.2 (Application). Consider the IVP for the linear system.

where A(t) is a continuous n × n matrix function on an interval I ⊆ ℝ, h(t)


is a continuous n-vector function on I, and (t0, y0) ∈ I × ℝn.
By the continuity of A and h, f is Lipschitz on each subset [a, b] × ℝn, [a,
b] ⊆ I, and so by Theorem 3.3, IVP’s for (4.3) have unique solutions on I.
Now ‖f (t, y)‖ ≤ ‖A(t)‖‖y(t)‖ + ‖h(t)‖ on I × ℝn, and so if we let φ (t, x) =
‖A(t)‖x + ‖h(t)‖, then a scalar comparison equation as in Theorem 4.2 for
an extension to the right is given by
x′ = ‖A(t)‖x + ‖h(t)‖, x(t0) = ‖y0‖.
Exercise 22. Calculate the unique solution of the nonhomogeneous scalar
IVP
x′ = ‖A(t)‖x + ‖h(t)‖, x(t0) = ‖y0‖,
which will be the maximal solution, since solutions of IVP′s are unique.
(This is a simple differential equation which can be solved using integrating
factors.) Conclude that the solution y(t) of (4.3) satisfies ‖y(t)‖ ≤ xM (t) on
[t0, ∞) ∩ I. This implies the existence of an extension of y(t) and also
establishes a bound on ‖y(t)‖.
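A numerical sanity check of this comparison (the matrix A(t), forcing term h(t), and initial vector below are hypothetical choices, not taken from the text): the Euclidean norm of the solution of (4.3) stays below the solution of the scalar comparison IVP x′ = ‖A(t)‖x + ‖h(t)‖, x(t0) = ‖y0‖.

import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
normA = np.linalg.norm(A, 2)                     # induced Euclidean norm; here equal to 1

def rhs(t, u):                                   # u = (y1, y2, x): the system and the scalar comparison
    y, x = u[:2], u[2]
    h = np.array([np.sin(t), 0.0])
    return np.concatenate([A @ y + h, [normA * x + np.linalg.norm(h)]])

sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0, 1.0],
                t_eval=np.linspace(0.0, 5.0, 101), rtol=1e-10, atol=1e-12)
ynorm = np.linalg.norm(sol.y[:2], axis=0)
xM = sol.y[2]
print("||y(t)|| <= xM(t) at every sample point:", np.all(ynorm <= xM + 1e-8))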
Exercise 23. There is another way in which you can determine a bound on
‖ y (t)‖. The solution y(t) of (4.3) is also a solution of
y(t) = y0 + ∫ₜ₀ᵗ [A(s)y(s) + h(s)] ds.
Hence,
‖y(t)‖ ≤ ‖y0‖ + ∫ₜ₀ᵗ ‖h(s)‖ ds + ∫ₜ₀ᵗ ‖A(s)‖‖y(s)‖ ds.
Apply Theorem 1.2 (Gronwall inequality) to the inequality to obtain a
bound on ‖y(t)‖ on [t0, ∞) ∩ I and compare your result with the one given
in Exercise 22. Does Corollary 4.4 of Theorem 4.1 apply?

4.2 Kamke Uniqueness Theorem

One of the principal uses of Theorems 4.1 and 4.2 and the corollaries is to
obtain uniqueness theorems.
Theorem 4.3 (Kamke Uniqueness Theorem). Let f(t, y) be continuous on
Q = {(t, y)| t0 ≤ t ≤ t0 + a, ‖y – y0‖ ≤ b}. Let φ (t, u) be a real-valued
function satisfying the following conditions:
(1) φ is continuous on (t0, t0 + a] × [0, 2b].
(2) φ (t, 0) ≡ 0 on (t0, t0 + a].
(3) For any 0 < ∊ ≤ a, u(t) ≡ 0 is the only solution of u′ = φ (t, u) on (t0, t0
+ ∊] which satisfies u(t) → 0 and u(t)/(t − t0) → 0 as t ↓ t0.
Assume that for any (t, y1),(t, y2) ∈ Q with t > t0,

‖f(t, y1) − f(t, y2)‖ ≤ φ(t, ‖y1 − y2‖).
Then the IVP
y′ = f(t, y), y(t0) = y0
has only one solution on any interval [t0, t0 + ∊], with 0 < ∊ ≤ a.

Proof. Assume the conclusion of the theorem is false so that the IVP y′ =
f(t, y), y(t0) = y0 has distinct solutions y1(t) and y2(t) on [t0, t0 + ∊] with 0 <
∊ ≤ a. Let y(t) ≡ y1 (t) – y2 (t). Then there exists a t1∈ (t0, t0 + ∊] such that
‖y(t1) ‖ = ‖y1 (t1) – y2 (t1) ‖ > 0 and ‖y(t)‖ < 2b on [t0, t1]. Then on (t0, t1],

image
Now let vm(t) be the minimal solution of the IVP,

u′ = φ(t, u), u(t1) = ‖y(t1)‖,
and let (α, t1] be the left maximal interval of existence for vm(t) (of course,
(α, t1] ⊆ (t0, t1]).
It follows from part (3) of Exercise 19 that vm(t) ≤ ‖y(t)‖ on (α, t1]. We
claim that vm(t) can be continued to (t0, t1] such that 0 ≤ vm (t) ≤ ‖ y(t)‖ (not
necessarily as a minimal solution all the way to t0, but as a nonnegative
solution nevertheless).
First, if t0 < α < t1 and there exists α′ with α < α ′ < t1, such that vm(t) > 0
on (α′, t1] and vm(α′) = 0, then by continuity, image So,

image
is a solution of u′ = φ (t, u) and satisfies image.
For the other case, if t0 < α < t1 and vm(t) > 0 on (α, t1], since φ(t, u) is
bounded and continuous on [α, t1] × [0, 2b], it follows from our results of
continuation of solutions that vm(t) can be continued to a solution on [α, t1].
(i) If vm(α) = 0, repeat the argument of the previous case with α′ = α.
(ii) If vm(α) > 0, and of course 0 < vm (α) ≤ ‖y(α)‖, then vm(t) could be
continued yet further to the left as a minimal solution of u′ = φ (t, u) (with
u(α) = vm(α)), and satisfy 0 ≤ vm(t) ≤ ‖y(t)‖ on (α – δ, t1], for some δ > 0,
and hence as such is still a minimal solution of u′ = φ (t, u), u(t1) = ‖y(t1)‖,
which is a contradiction to the fact that (α, t1] is left maximal.
Therefore, either by construction, or from the impossibility of case (ii)
above, it follows that 0 ≤ vm(t) ≤ ‖y(t)‖ on (t0, t1]. Hence

image
So, vm (t) → 0, as t → t0. Also,

image
and so as t ↓ t0, we have

image
i.e., image, as t ↓ t0. From condition (3), vm(t) ≡ 0 on (t0, t1]; this is a
contradiction to vm(t1) = ‖y(t1)‖ > 0.
Therefore, it follows that y1(t) and y2(t) are not distinct solutions of the
IVP. ☐
Corollary 4.5 (Nagumo). If f(t, y) is continuous on Q = {(t, y)| t0 ≤ t ≤ t0 +
a, ‖y – y0‖ ≤ b} and if for any points (t, y1), (t, y2) ∈ Q with t > t0, ‖f(t, y1) − f(t, y2)‖ ≤ ‖y1 − y2‖/(t − t0),
then the solution of the IVP
y′ = f(t, y), y(t0) = y0
is unique to the right.
Proof. Define φ(t, u) = u/(t − t0) on (t0, t0 + a] × [0, 2b]. Then φ satisfies (1) and (2)
of Theorem 4.3. Consider (t1, u0) ∈ (t0, t0 + a] × [0, 2b] and the IVP

u′ = u/(t − t0), u(t1) = u0.
It follows that u(t) = u0(t − t0)/(t1 − t0) is the unique solution. It is the case that u(t) → 0 as t
→ t0; however, u(t)/(t − t0) ↛ 0 unless u0 = 0. Thus condition (3) of Theorem 4.3 is
also satisfied.
Therefore, there is a unique solution to the right by Theorem 4.3. ☐
Theorem 4.4. Let f(t, y) be a continuous real-valued function on Q = [t0, t0
+ a] × [y0 – b, y0 + b] ⊆ ℝ × ℝ. Let φ (t, u) be a continuous real-valued
function on (t0, t0 + a] × [0,2b] and assume φ is nondecreasing in u for each
fixed t. Then, if the hypotheses of Theorem 4.3 are satisfied, it follows that
the sequence of Picard iterates image, n ≥ 1, converges uniformly on [t0,
t0 + α] to a solution of the IVP

y′ = f(t, y), y(t0) = y0,
where α = min{a, b/M}, M = maxQ |f(t, y)|.
We note that Theorem 4.4 applies to first order scalar equations.
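To make the Picard iterates concrete, here is a small symbolic computation (the scalar IVP x′ = x, x(0) = 1 is just an illustrative choice, not tied to the hypotheses of Theorem 4.4): each iterate adds the next term of the exponential series, so the iterates converge uniformly to et on compact intervals.

import sympy as sp

t, s = sp.symbols('t s')
phi = sp.Integer(1)                                   # phi_0(t) = x0 = 1
for n in range(5):
    # phi_{n+1}(t) = x0 + integral from 0 to t of f(s, phi_n(s)) ds, with f(t, x) = x
    phi = 1 + sp.integrate(phi.subs(t, s), (s, 0, t))
    print(sp.expand(phi))                             # 1 + t, 1 + t + t**2/2, ...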
Chapter 5
Linear Systems of Differential Equations
5.1 Linear Systems of Differential Equations

We now turn to a detailed discussion of linear systems:

(1) x′ = A(t)x, a homogeneous first order system; and

(2) x′ = A(t)x + f(t), a nonhomogeneous first order system.

We will assume that A(t) = (aij(t)) is a continuous n × n matrix function on


an interval I and that the entries aij(t) are real- or complex-valued. We will
also assume that f(t) is a continuous n-vector function on I with real or
complex values.

We have previously shown that, if ‖x‖ = max1≤i≤n|xi|, for x ∈ ℝn, then for
A a constant matrix,

Now, we want to discuss the induced norm inline, where inline is the
Euclidean norm of ℂn. Prior to this, we will discuss some properties of the
inner product. An inner product is a mapping inline, where inline is the
set of complex numbers, defined by:

Then

(1) inline (linear in the first component).

(2) inline (conjugate linear in the second component).


(3) inline. So, inline and inline iff x = 0.

(4) inline

Lemma 5.1. Let x = (x1, x2, …, xn), y = (y1, y2, …, yn)∈ inlinen. Then

Proof. The first part is true by the triangle inequality and after that, the
Schwarz inequality applies. ☐

Thus, if A = (aij) and inline, then

so that

Hence,

This expression will not equal ‖A‖ unless the rows and columns of A are
scalar multiples of each other. In a later setting, we may make use of this
expression as an upper bound for ‖A‖.
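A small numerical illustration (assuming the upper bound in question is the square root of Σi,j |aij|2, which is what the Schwarz inequality argument above produces):

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
induced = np.linalg.norm(A, 2)              # induced Euclidean norm (largest singular value)
bound = np.sqrt(np.sum(np.abs(A) ** 2))     # the Schwarz-inequality upper bound
print(induced, "<=", bound, induced <= bound)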

Let’s now let B be a normed vector space and suppose h : [a, b] ⊆ ℝ → B


is continuous at t0, i.e., inline.

Now if we take inline so that ‖A‖ = max1≤i≤n inline, then clearly


inline, for any fixed 1 ≤ i0 ≤ n.

Also, inline. So, if A(t) = (aij(t)), then


Theorem 5.1. A(t) is continuous at t0 iff apq(t) is continuous at t0, for all 1
≤ p, q ≤ n.

We now state and prove some basic theorems concerning linear systems.

Lemma 5.2. Assume A(t) and f(t) are continuous on I ⊆ ℝ. Then for each
point (t0, x0) ∈ I × ℝn, each of the IVP’s

has a unique solution on I.

Proof. Since f(t) ≡ 0 in (5.1), we will directly deal with (5.2). First, for x1,
x2 ∊ ℝn, we have

Thus, if [a, b] ⊆ I, we have

so that a Lipschitz condition is satisfied wrt x on each compact [a, b] ⊆ I.


By Theorem 1.3, each of the IVP’s has a unique solution on I.☐

Corollary 5.1. If A(t) is continuous on I ⊆ ℝ, then for each t0 ∊ I, the


unique solution of

is x(t) ≡ 0.

Our next results are concerned with the solution space of (5.1).

Theorem 5.2. Assume that A(t) and f(t) are continuous on I ⊆ ℝ. Let x1(t),
x2(t), …, xm(t) be solutions of (5.1); let α1, α2, …, αm be constants. Then
inline is a solution of (5.1). Moreover, if y1(t) and y2(t) are solutions of
(5.2), then y1(t) − y2(t) is a solution of (5.1).

Proof. First, by

we have inline is a solution of (5.1).

For the other part,

Therefore, y1(t) − y2(t) is a solution of (5.1).☐

We will denote the set of all continuous ℝn-valued functions on an interval


I ⊆ ℝ by C[I, ℝn]. Similarly, C[I, ℂn] is the set of all continuous
ℂn-valued functions on I ⊆ ℝ. C[I, ℝn] is a vector space over ℝ and
C[I, ℂn] is a vector space over ℂ.

Theorem 5.2 shows that the solution space (collection of solutions) of x′ =


A(t)x is a subspace of C[I, ℝn] or C[I, inline n] depending upon whether
A(t) is real- or complex-valued.

Recall that if V is a vector space over a field K, then a subset U ⊆ V is said


to be linearly independent (L.I.) in case for any finite set of distinct vectors
{x1, x2, …, xm} in U, α1x1 + … + αmxm = 0 implies α1 = … = αm = 0, where α1, …, αm ∈ K. A subset W ⊆ V is said
to span V in case every x ∈ V can be expressed as a finite linear
combination of vectors from W. A subset B ⊆ V is said to be a basis for V
in case B is L.I. and B spans V. All bases of a given vector space V have the
same cardinality.

In particular, if V = ℝn, K = ℝ, then dim V = n with basis {e1, …, en}. In fact,


any set of n L.I. vectors in ℝn is a basis.
Theorem 5.3. Let x1(t), x2(t), …, xm(t) be solutions of (5.1). Then, if there
are constants α1, α2, …, αm and a point t0 ∊ I such that inline, then it
follows that inline on I.

Consequently, the solution space of (5.1) is an n-dimensional subspace of C


(I, ℝn) (or C[I, inlinen]). Furthermore, if x1(t), x2(t), …, xn(t) are
solutions of (5.1) such that, for some t0 ∈ I, x1(t0), …, xn(t0) are L.I. in ℝn
(or inlinen), then x1(t), …, xn(t) constitute a basis for the solution space of
(5.1).

Proof. For the first assertion, assume x1(t), …, xm(t) are solutions of (5.1)
and that α1, …, αm are scalars and t0 ∊ I such that inline. By Theorem
5.2, inline is a solution of (5.1). Moreover, x(t0)= 0, and so by Corollary
5.1, x(t) ≡ 0 on I.

Assume now that x1(t), …, xn(t) are solutions of (5.1) such that at some t0,
x1(t0), …, xn(t0) are L.I. vectors in ℝn. Such solutions exist; e.g., for each 1
≤ j ≤ n, let xj(t) be the solution of

We claim that any such set x1(t), …, xn(t) are L.I. vectors in ℝn, for all
points t ∊ I. If not, then there are scalars α1, …,αn not all zero and t1 ∊ I,
such that inline. By the first assertion, inline, and in particular inline,
which contradicts the L.I. of inline. Hence inline are L.I. in inlinen,

We now show that x1(t), …, xn(t) span the solution space of (5.1). Let z(t)
be any solution of (5.1). Since x1(t0), …, xn(t0) are L.I. in ℝn, they must
constitute a basis for ℝn. Now z(t0) ∊ ℝn, so there are scalars α1,…,αn such
that inline. Now z(t) and inline are both solutions of
and so by Lemma 5.2, inline, for t ∊ I.

Therefore, inline span and hence form a basis for the solution space of
(5.1). ☐

Corollary 5.2. The solution set of (5.2) consists of all y(t) = inline, where
y0(t) is some fixed solution of (5.2) and x1(t),…, xn(t) are L.I. solutions of
(5.1).

Proof. Consider the IVP (5.2):

Let y(t) be the solution.

Let xj (t) satisfy

for each 1 ≤ j ≤ n. Then, these xj’s are L.I.

Let y0(t) be the solution of

Then the solution of (5.2) will be inline, since y(t) − y0(t) and inline
satisfy the same IVP for (5.1)☐

Exercise 24. Assume that f(t, x) is continuous and has continuous first
partial derivatives wrt the components of x on an open set D ⊆ ℝ× ℝn. Let
(t0, x0) ∊ D and (α, ω) be the maximal interval of existence of x(t; t0, x0),
the solution of inline and xj(t) = inline Show inline on (α, ω), where
fj(t, x) is the jth component of f(t, x); i.e.,
5.2 Some Properties of Matrices

Let inline denote the set of all n × n matrices with complex entries. Then,
we have the following properties of inline.

(1) inline is a vector space over inline with vector operations defined
for α ∊ inline, A = (aij) and B = (bij) as

As a complex vector space, dim inline= n2.

(2) Define AB = C, where inline. In general, AB ≠ BA. We have


associative properties, distributive, α(AB) = A(αB) = (αA)B, etc. inline is
an “algebra”.

(3) AT = (aij)T = (aji) and so (AB)T = BTAT. Also, inline and the adjoint,
inline. The identity matrix I = (δij), and if associated with A, there is a
matrix B such that AB = BA = I, then A is said to be nonsingular. We write
A –1 = B.

If A ∊ inline is such that det A ≠ 0, and B = (bij), where bij is the cofactor
of aij, then A is nonsingular and inline.

Exercise 25. Prove that if A is nonsingular, then det A ≠ 0. (Note: det(AB) = det A · det B.)

It is also the case that A is nonsingular iff A is a one-to-one transformation.


Equivalently, the fact that A is nonsingular is equivalent to the statement:
Ax = 0 iff x = 0.

Other facts which may be established are

From this, A is nonsingular iff inline are nonsingular.


If A, B are nonsingular, then so is AB and (AB)-1=B-1A-1. Also, AA-1 = I
implies (AA-1)* = I* = I, and so (A-1)* A* = I. Thus, (A-1)* =(A*)-1.
Similarly inline.

(4) Range and null space of A ∊ inline.

The range space inline, and the null space inline = inline. From linear
algebra, inline inline. Moreover,

Let us recall that the inner product of x, y ∊ ℂn is given by inline inline.


From this, we have inline and consequently, for a given A ∊ inline and
a vector b ∊ ℂn, Ax = b has a solution x ∊ ℂn iff inline and inline dim
inline.

(5) Properties of ‖A‖

(i) ‖A‖ ≥ 0 and ‖A‖ = 0 iff A = (0).

(ii) ‖Ax‖ ≤ ‖A‖‖x‖.

(iii) ‖αA‖ = |α|‖A‖.

(iv) ‖A + B‖ ≤ ‖A‖ + ‖B‖.

(v) ‖AB‖ ≤ ‖A‖‖B‖.

Proof of (iv). We use inline. So, for all x ∊ ℂn,

If ‖A‖ + ‖B‖ = M′ > 0, then inline, for all x ∊ ℂn. Hence,


Proof of (v). Notice inline, for any x. If inline, then inline and so
‖AB‖ ≤ ‖A‖‖B‖.☐

We continue this brief review with a glance at metric


spaces.

A metric space (S, d) consists of a set S and a mapping d : S× S → ℝ+ such


that for any s1, s2, s3 ∊ S,

(a) d(s1, s2) ≥ 0, and d(s1, s2) = 0 iff s1 = s2;

(b) d(s1, s2)= d(s2, s1);

(c) d(s1, s2) ≤ d(s1, s3) + d(s3, s2).

A metric space S is said to be complete in case every Cauchy sequence in S


converges in S.

For A = (aij), B = (bij) ∊ inline, define d(A, B) = ‖ A − B‖. Then d is a


metric on inline. From statements above, we have

Remark 5.1. inline is a complete metric space.

Proof. Let {Ak} be a Cauchy sequence in inline. Then for each ε > 0,
there exists Nε such that d(Ak, Al) = ‖ Ak − Al‖ < ε,for all k, l ≥ Nε. Thus,
for all 1 ≤ p, q ≤ n,

Hence, inline is a Cauchy sequence in ℂ, for each fixed 1 ≤ p, q ≤ n.


Since ℂ is complete, for each 1 ≤ p, q ≤ n, there exists an inline such that
inline. If we let inline, then inline inline.

Hence, lim k → ∞ d(Ak, A0) = 0, and consequently, inline is complete in


the metric d(A, B) = ‖A − B‖.☐
Since inline is also a normed vector space, inline is a Banach space. In
fact, inline is a Banach algebra.

Remark 5.2. In a complete metric space, a sequence converges iff it is


Cauchy.

Exercise 26. Let inline be a sequence in inline. Assume inline and


that AkC = CAk, for all k ≥ 1, where C ∊ inline. Prove that A0C = CA0.

5.3 Infinite Series of Matrices and Matrix-Valued Functions

For what is to follow concerning linear differential equations, we need to


discuss what is meant by infinite series of matrices.

With an infinite series of matrices: inline, we can associate the sequence


of partial sums, inline.

If the sequence of partial sums converges, we say that the series converges;
if the sequence of partial sums diverges, then the series diverges.

Example 5.1. Consider

This series converges for any A ∊ inline and t ∊ ℝ.

To see this convergence, consider the series of numbers

which converges to e|t|‖ A‖, for all |t|,‖A‖. Thus, for this series of numbers,
the Cauchy criterion is satisfied by its sequence of partial sums, inline.
Hence, for each ε > 0, there exists Nε > 0 such that
Now let {Sn} be the sequence of partial sums associated with inline. Then
for the same ε > 0 and for all n > m ≥ Nε,

Hence, {Sn} form a Cauchy sequence in inline and hence {Sn} converges
by the completeness of inline; i.e., inline converges, for all t, A.

Since the above series closely resembles the exponential series, we define
inline.

In a similar way we can show that inline converges; and we define


inline. Other recognizable series are treated similarly.
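As a quick numerical check of these definitions (a sketch with an arbitrary 2 × 2 matrix), the partial sums of the exponential series can be compared with a library implementation of the matrix exponential:

import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
t = 1.0
S = np.zeros_like(A)
term = np.eye(2)                        # (tA)^0 / 0!
for k in range(25):
    S = S + term
    term = term @ (t * A) / (k + 1)     # next term (tA)^(k+1) / (k+1)!
print(np.max(np.abs(S - expm(t * A))))  # difference is at the level of rounding error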

In what follows, we define what we mean by continuity, differentiability,


and integrability of a matrix-valued function. Let A:[a, b] → inline.

1.A(t) is continuous at t0 ∊ [a, b], in case inline.

2.A(t) has derivative B ∊ inline at t = t0, in case

We write A' (t0) = B.

3.Let inline be a partition of [a, b] with inline. Select inline. We say


that A(t) is Riemann integrable on [a, b] in case there exists a B ∊ inline
such that for all ε > 0, there exists δ > 0 such that

for any partition Π with ‖Π‖ < δ.

We can easily prove the following:

(1) A(t) is continuous at t0 iff each aij(t) is continuous at t0.


Proof. The conclusion is proved by making use of

(2) A(t) is differentiable at t0 iff aij(t) is differentiable at t0.

Proof. It is proved by making use of

This says, in fact, that inline.

(3) A(t) is integrable iff aij(t) is integrable.

Proof. It can be proved by making use of the above definition of


integrability of A(t) and inequalities similar to those in (1) and (2). It will
follow that

Now if we assume differentiability, we can verify the following:

(4) If A(t), B(t) ∊ inline, then inline.

(5) If inline, then inline, and inline.

(6) If inline, then inline inline.

(7) If A(t) ∊ inline and A(t) is nonsingular, then A–1(t) is differentiable


and

Proof. First inline, where B(t) is the transposed matrix of cofactors of A.


Since A(t) = (aij (t)) is differentiable, it follows that bij(t)’s are
differentiable and also that det A(t) is differentiable. Hence, A-1(t) is
differentiable.
Now

yields

which implies

(8) inline

(9) Suppose the matrix Φ (t) satisfies the D.E. Φ ′(t) = A(t) Φ (t). Then,

If Φ is nonsingular, then inline. Therefore,

Thus Φ–1(t)* satisfies the D.E.

5.4 Linear Matrix System

Now from our previous theory, since h(t, x)= A(t)x + f(t), where A(t) ∊
inline is continuous and f(t) ∊ ℂn is continuous, h(t, x) satisfies a
Lipschitz condition wrt x on each slab [a, b]× ℂn, it follows from the Picard
Existence Theorem that the linear system

image

where t0 ∊ J and J is an interval, has a unique solution on all of J which is


a solution of the integral equation
image

and is also the uniform limit of Picard iterates on each compact subinterval
of J.

In summary, if A(t) and f(t) are continuous on J, then the unique solution of
the IVP (5.4) can be written as

where c = (c1,…, cn)T, xj(t) is the solution of the IVP

and z(t) is the solution of

In a similar way, we can take under our consideration IVP’s for a matrix
system,

X′ = A(t)X + B(t), X(t0) = C, (5.6)

where A(t) and B(t) are continuous n × n matrix functions on an interval J


⊆ ℝ, and C is a fixed constant n × n matrix, and t0 ∊ J.

The matrix IVP (5.6) is equivalent to the IVP for a system of n2 scalar
linear equations

Also, inline is a solution of the matrix equation iff each of its column
vectors inline is a solution of the vector equation

where bj(t) is the jth column of B(t). (Note: One obtains the jth column of
A(t)Φ(t) by multiplying A(t) by the jth column of Φ(t); i.e., A(t)Φj(t).)
In light of our previous discussions, the unique solution of the IVP

can be obtained as the uniform limit of the sequence of Picard iterates as


before:

In the case when A ∊ inline is constant, we give special attention to IVP’s


for the homogeneous matrix equation. Consider

By Picard iteration,

By previous results, the Picard iterates converge uniformly on any compact


interval [a, b] ⊆ ℝ, and moreover, the Φn consists of the nth partial sum
whose limit we designated by the exponential function; i.e., the solution
inline.

Observe that since this is a solution of the D.E. above, we also have

Exercise 27.

(i)Show that inline is nonsingular, for each t ∊ ℝ, and that inline.


(Note: Verifying the equality will demonstrate that inline is nonsingular.).

(ii)Show etAB = BetA, for all t ∊ ℝ, iff AB = BA.

(iii)Show et(A + B) = etAetB, for all t ∊ ℝ, iff AB = BA.

Note: etB is the unique solution of


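A numerical illustration of parts (ii) and (iii) of Exercise 27 (the matrices below are arbitrary choices): the identity et(A + B) = etAetB holds for a commuting pair and fails for a non-commuting one.

import numpy as np
from scipy.linalg import expm

t = 1.0
A = np.array([[1.0, 2.0], [0.0, 3.0]])
B_commuting = 2.0 * A                                  # commutes with A
B_noncommuting = np.array([[0.0, 1.0], [1.0, 0.0]])    # does not commute with A

for B in (B_commuting, B_noncommuting):
    lhs = expm(t * (A + B))
    rhs = expm(t * A) @ expm(t * B)
    print("AB = BA:", np.allclose(A @ B, B @ A),
          "   e^{t(A+B)} = e^{tA} e^{tB}:", np.allclose(lhs, rhs))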
From this point on, if we haven’t already stated it, assume that matrix and
vector functions are continuous on an interval J ⊆ ℝ.

Lemma 5.3. If X(t) is a matrix solution of the matrix equation X′ = A(t)X


and if c ∊ ℂn, then x(t) = X(t) c is a solution of the vector system x′ = A(t)x.

Proof. Notice

Thus, if x(t) = X (t) c, then we have x′(t) = A(t)x(t).☐

Lemma 5.4. If X(t) is a solution of the matrix equation X′ = A(t)X, then


either X(t) is nonsingular for all t ∊ J, or X(t) is singular for all t ∊ J.

Proof. Assume that for some t0 ∊ J, X(t0) is singular. Now recall that a
matrix B ∊ inline is singular iff there exists c ∊ ℂn, c ≠ 0, such that Bc =
0. Thus there exists c ≠ 0 such that X (t0) c = 0.

Let x(t) = X (t) c. Then from above, x(t) is a solution of

By uniqueness of solutions of IVP’s, x(t) is the unique solution and hence,


x(t) ≡ 0, i.e., X(t)c ≡ 0 on J. Hence, X(t) is singular, for all t ∊ J.☐

Note: Lemma 5.4 says that if c ∊ inline (X(t0)), for some t0, then c ∊
inline (X(t)), for all t ∊ J. This is strong and is due to the fact that X(t) is a
solution of the D.E.

Theorem 5.4. Let X(t) be a solution of X′ = A(t)X and let t0 ∊ J. Then det
X(t) = det X(t0) exp(∫ₜ₀ᵗ tr A(s) ds), where tr A(s) = a11(s) + … + ann(s). (Note: This also confirms Lemma 5.4.)

Proof. Recall that if X(t) = (xij(t)) is a solution of X′ = A(t)X, then inline,


for all 1 ≤ i, j ≤ n.
Now, denoting

then

If we select the kth summand

then, since inline, we have

(By elementary row operations, every row cancels with each of the sums in
the kth row.) Hence,

If we set ψ(t) = det X(t), we then have ψ′(t) = (tr A(t))ψ(t).
Using integrating factors, ψ(t) exp(−∫ₜ₀ᵗ tr A(s) ds) = constant = K.

So, K = det X(t0) and hence,

Consider now Theorem 5.4 applied to the nth order linear equation,

As a vector system, we write


or

Furthermore, as we have seen, y(t) is a solution of y′ = A(t)y iff inline,


where y1(t) is a solution of the nth order linear equation (5.8). Thus, X(t) is
a matrix solution of X′ = A(t)X iff each column of X(t) is of the form of y(t)
above; i.e., iff

where xi(t), 1 ≤ i ≤ n, is a solution of the nth order linear equation (5.8). Then


consider

which is called the Wronskian of the solutions x1(t), …, xn(t) of the nth
order linear equation (5.8) and is denoted by W(t; x1, …, xn). From
Theorem 5.4, since tr A(t) = −p1(t) for the companion matrix above, we obtain Abel’s formula W(t; x1, …, xn) = W(t0; x1, …, xn) exp(−∫ₜ₀ᵗ p1(s) ds).

Definition 5.1. A nonsingular matrix solution X(t) of the matrix D.E. X′ =


A(t)X will be called a fundamental matrix solution. The fundamental matrix
solution of the IVP:

will be denoted by X(t, t0) (i.e., this is the solution which satisfies X(t0, t0)
= I).

Remark 5.3. The solution of the IVP

is given by X(t, t0)C.

Proof. If X(t) is a solution of X′ = A(t)X and if C ∊ inline, then


i.e., X(t)C is also a solution of the matrix equation. Hence X(t, t0)C is a
solution which satisfies the initial condition X(t0, t0)C = IC = C. ☐

Another observation is, if Y (t) is a solution of X′ = A(t)X and t0 ∊ J, then Y


(t) ≡ X(t, t0) Y (t0) by the above remark.

Definition 5.2. The system x′ = –A*(t)x is called the adjoint system wrt the
system x′ = A(t)x.

Exercise 28. (i) Show that for any s, t, t0 ∊ J, X(t, t0) = X(t, s)X(s, t0).
Hence [X(s, t0)]−1 = X(t0, s).

(ii) Let A(t) be continuous on [0, +∞) and assume that ‖x(t)‖ is bounded on [0,
+∞) for each solution of the vector equation x′ = A(t)x. Let X(t) be a
fundamental matrix solution of X′ = A(t)X. Then prove that ‖X−1(t)‖ is
bounded on [0, +∞) iff Re ∫₀ᵗ tr A(s) ds is bounded below on [0, +∞).

(iii)Let X(t) be the fundamental matrix solution of X′ = A(t)X. Then Y (t) is


a fundamental matrix solution of the adjoint system X′ = −A*(t)X iff Y*
(t)X(t) ≡ C, where C is nonsingular.

Now let’s consider the IVP

The solution, as we have seen, can be expressed as


, where c = [c1, …, cn]T, xj(t) is the solution
of x′ = A(t)x, x(t0) = ej and z(t) is the solution of x′ = A(t)x + f(t), x(t0) = 0.
We would like to analyze z(t) a little closer. To begin with, since X(t, t0) =
[x1(t), …, xn(t)], we can write the solution x(t) as x(t) = X(t, t0)c + z(t).
Theorem 5.5 (Variation of Constants Formula). Let X(t) be a
fundamental matrix solution of the matrix D.E. X′ = A(t)X. Then inline ds
is the solution of the IVP

Proof. We seek a solution z(t) of x′ = A(t)x + f(t) of the form z(t) = X(t)y(t),
and we try to determine y(t). Now inline.

Thus, if z(t) is a solution, we must have

which yields

that is, y′(t) = X−1(t)f(t).

Also, to have z(t0) = X(t0)y(t0) = 0, we must have y(t0) = 0, since X(t0) is


nonsingular.

Consequently,

and so

Since z(t) = X(t)y(t), we conclude that z(t) = X(t) ∫ₜ₀ᵗ X−1(s)f(s) ds. ☐

Note: X(t)X−1(t0) is a solution of the IVP


so that by uniqueness, X(t, t0) = X(t)X−1(t0). It follows from Theorem 5.5
that

Hence, finally the solution of the IVP

can be written as x(t) = X(t, t0)c + ∫ₜ₀ᵗ X(t, s)f(s) ds.

Exercise 29. Show that the solution of the IVP

can also be expressed in the form ds, where Y (t) is


a fundamental matrix solution of X′ = −A*(t)X.

Remark 5.4. The unique solution of X′ = A(t)X + B(t), X(t0) = C is


.

5.5 Higher Order Differential Equations

Let us again consider the two equations:

By our previous results, the equivalent vector systems are


where

Let x1(t), …, xm(t) be solutions of (5.9) on J. Then (5.11) has corresponding


solutions, y1(t), …, ym(t) given by

Lemma 5.5. x1(t), …, xm(t) are L.I. in the vector space C(n−1)[J, ℂ] iff
y1(t), …, ym(t) are L.I. in the vector space C[J, ℂn].

Proof. Assume that y1, …, ym are L.D. in C[J, ℂn]. So, there exist α1, …,
αm, not all zero, in ℂ such that α1y1(t) + … + αmym(t) ≡ 0 on J. Hence, the sum in the
first component is zero. In other words, α1x1(t) + … + αmxm(t) ≡ 0 on J.
Therefore, x1, …, xm are L.D.

Conversely, assume that x1, …, xm are L.D. in C(n−1)[J, ℂ]. Then, there
exist α1, …, αm, not all zero, in ℂ such that α1x1(t)+…+αmxm(t) ≡ 0 on J.
Upon differentiating n − 1 times, we have α1x1(i)(t) + … + αmxm(i)(t) ≡ 0 on J, for
all 1 ≤ i ≤ n−1. From the manner in which the yi(t) were constructed, we
have α1y1(t)+…+αmym(t) ≡ 0 on J, or y1, …, ym are L.D.

Now a matrix solution X(t) of (5.11) has the form


So,

X(t) is a fundamental matrix solution
    iff X(t) is nonsingular,
    iff X(t)c = 0 yields c = 0 ∊ ℂn,
    iff the columns of X(t) are L.I. in C[J, ℂn],
    iff x1(t), …, xn(t) are L.I. in C(n−1)[J, ℂ].

In fact, the fundamental matrix solution X(t) of (5.11) satisfying the initial
condition X(t0) = I has as the jth element in the first row, the solution xj(t)
of (5.9) which satisfies

Thus, if x1, …, xn satisfy the above initial conditions, then by Theorem 5.4,

Example 5.2. Let x1(t) and x2(t) be solutions of x′′ + 2tx′ − t2x = 0 satisfying
the respective initial conditions

Then
Now the unique solution of the IVP corresponding to (5.9)

is given by ds, where X(t, s) is the solution of

Recall

Now the first component x(t) of the solution is a


solution of equation (5.10). Since y(t0) = 0, x(t) satisfies the initial
conditions x(i)(t0) = 0, 0 ≤ i ≤ n − 1.

The first component x(t) of y(t) is given by x(t) = ∫ₜ₀ᵗ u(t, s)f(s) ds,

where u(t, s) is the solution of the IVP of equation (5.5)

And u(t, s) is frequently called the Cauchy Function for the equation
In summary, the unique solution of the IVP

is given by

Example 5.3.

(1) x(n) = 0.

Then u(t, s) = (t − s)n−1/(n − 1)!. Thus, the unique solution of

is given by

(2) x′′ + x = 0.

Then u(t, s) = sin(t − s). Thus, the unique solution of

is given by
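In the usual notation (a sketch; the initial data x(t0) = c1, x′(t0) = c2 are assumed), the formula this example refers to is

\[
x(t) = c_1 \cos(t - t_0) + c_2 \sin(t - t_0) + \int_{t_0}^{t} \sin(t - s)\, f(s)\, ds ,
\]

which can be checked by differentiating under the integral sign: x″(t) = f(t) − x(t), and the integral term and its first derivative vanish at t = t0.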
(3) From (1), we can derive Taylor’s formula with remainder.

Theorem 5.6. Let f(t) ∊ C(n − 1)[a, b] and suppose f(n)(t) exists and is
integrable on [a, b]. Then

Proof. Consider the IVP

This has a unique solution which must be f(t). The solution of the
homogeneous equation x(n) = 0 is given by a suitable combination of the
solutions xj(t) satisfying

Note by (1). Hence

(4) Applying (2) we solve

where

First, considering the homogeneous equation x′′ + x = 0, two solutions


satisfying
are given respectively by x1(t) = cos t, x2(t) = sin t. Then the solution of the
IVP is

where χ[π,∞)(s) is the indicator function.

(5) For this example, we recall our variation of constants formula in


Theorem 5.5 for the solution of

The solution was given by , where we arrived at this by


looking for a solution of the form z(t) = X(t)y(t) where X(t) is a fundamental
matrix solution of the homogeneous equation and y(t) satisfied inline.

In light of this recall from elementary differential equation courses, we


consider the equation

Suppose x1(t) and x2(t) are L.I. solutions of the homogeneous equation x″ +
p1 (t)x′ + p2 (t)x = 0. Then we sought (in the elementary differential
equation course) a solution of (5.13) in the form z(t) = c1 (t)x1 (t) + c2 (t)x2
(t) (i.e., the “constants” vary, hence the name).

Now if we assume inline, and if inline, then it is the case that z(t) = c1
(t)x1 (t) + c2 (t)x2 (t) is a solution of (5.13).

Thus, if

then we have a solution of (5.13). Putting this in matrix form


which corresponds to X(t)y′(t) = f(t). That is,

and from this we have a solution of (5.13): z(t) = c1x1 + c2x2.

For our next consideration we will be concerned with solutions of the


adjoint equation associated with

where we assume the pi(t)’s are complex-valued functions which are


continuous on an interval J ⊆ ℝ.

From the associated vector system y′(t) = A(t)y, consider the adjoint system
y′(t) = − A*(t)y, where

In component form, the adjoint system becomes

If the pi’s are continuous on J, then any solution y(t) of the adjoint system
belongs to C(1) [J,ℂn]. To consider a scalar equation adjoint to (5.14), one
is usually concerned with differentiability of products inline.

If inline is differentiable, then inline is differentiable, since inline. So,


from above,

Similarly, if inline and inline are in turn differentiable, then inline is


differentiable and
Continuing, we obtain that the nth component yn satisfies

We conclude that, if y(t) is a solution of the adjoint system on J such that


for its nth component yn(t), we have inline is (n − j)th differentiable on J,
for all j = 1,2,…, n, then yn(t) is a solution of the scalar equation (5.15) on
J. The equation (5.15) is frequently called the adjoint of the original scalar
equation (5.14).

In the above development, the differentiation of the pi’s is important. For, if


inline has a derivative at t0 ∊ J and if yn (t0) ≠ 0, then inline is
differentiable, since inline. Hence, if inline is not differentiable, the
product inline is not likely to have a derivative unless yn = 0, which isn’t
often.

Exercise 30. The first part of this exercise is not related to the above
theory, but perhaps you will be able to complete it successfully. Assume
that p1 (t),…,pn (t) are continuous on an interval J. Then, if [a, b] ⊆ J, any
solution x(t) ≢ 0 of x(n) + p1 (t)x(n − 1) + … + pn (t)x = 0 can have at most a
finite number of zeros on [a, b]. Is this true for the nth component yn(t) of a
solution y(t) ≢ 0 of the adjoint system? (For this second part, use the above
theory.)

Exercise 31. Recalling that [X−1(t)]* is a solution of the adjoint matrix


system Y′ = − A*(t)Y, where X(t) is a solution of X′ = A(t)X, obtain 3
linearly independent solutions of (i.e., solutions of
in terms of Wronskians of solutions of the
equation which it is the adjoint of, namely inline. (Need Wronskians in
computing the inverse matrix.)

Exercise 32. Prove that X′ = A(t)X has a fundamental matrix solution X(t)
which is unitary; that is, X−1(t) = X* (t), iff A(t) ≡ − A*(t). Is this possible
for systems obtained from nth order scalar equations?
Exercise 33. If A(t) ≡ −A*(t) (i.e., A(t) is skew-symmetric), show that for
any solution x(t) of x′ = A(t)x, 〈x(t), x(t)〉 is a constant.

5.6 Systems of Equations with Constant Coefficient Matrices

For some time now, we will consider systems of equations with constant
coefficient matrices. Let A be a constant matrix.

Definition 5.3. A number λ ∊ ℂ is said to be an eigenvalue of the matrix A


∊ inlinen in case there exists a nonzero vector c such that Ac = λc, where
c is called an eigenvector.

Lemma 5.6. The eigenvalues of A are the roots of the polynomial equation
det[A − λI] = 0.

Proof. Suppose Ac = λc, for some nonzero vector c. Then,

Ac = λc iff (A − λI) c = 0 iff [A − λI] is singular iff det[A − λI] = 0.

Definition 5.4 The det[A − λI] is called the characteristic polynomial of the
matrix A.

Lemma 5.7 If λ1,…, λm are distinct eigenvalues of A and if c1,…, cm are


the associated eigenvectors, then the vectors c1,…, cm are L.I.

Proof. Assume that c1,…,cm are not L.I. Then, there exists a first integer j
with 1 < j < m such that c1,…,cj−1 are L.I., but c1,…,cj are L.D. Hence,
there exist scalars α1,…, αj, not all zero, such that α1c1 + … + αjcj = 0.
Since Aci = λici, we have 0 = A·0 = A(α1c1 + … + αjcj) = α1λ1c1 + … +
αjλjcj. Moreover, λj ≠ 0. For if λj = 0, we would have above that α1λ1c1 +
… + αj−1λj−1cj−1 = 0 and λi ≠ 0, for 1 ≤ i ≤ j − 1. So, αiλi = 0, 1 ≤ i ≤ j − 1,
since c1,…,cj − 1 are assumed L.I., thus αi = 0, 1 ≤ i ≤ j − 1. Since α1c1 + …
+ αjcj = 0 and cj ≠ 0, we would then have αj = 0, which is a contradiction to
the assumption that they were not all zero.

Consequently λj ≠ 0, and since the λi’s are all distinct, we have , for all
i ≠ j.

Now and αjcj = −(α1c1 + … + αj


−1cj−1) yield

Arguing as above, we have αi = 0, 1 ≤ i ≤ j. Again, we get a contradiction,


thus our supposition concerning the existence of α1,…,αj, not all zero, is
false and we have that c1,…,cm are L.I.☐

Lemma 5.8. If λ1,…, λm are distinct eigenvalues of A with associated
eigenvectors c1,…, cm, then xj(t) = eλjtcj, 1 ≤ j ≤ m, are L.I. solutions
of x′ = Ax.

Proof. First we verify that x1(t),…, xm(t) are solutions:

Hence, {x1(t),…, xm(t)} is a set of solutions of x′ = Ax.

Now xj(0) = cj, 1 ≤ j ≤ m, which are L.I. by the above lemma. Thus, by


Theorem 5.3, x1(t),…,xm(t) are L.I. solutions.☐

In relating this lemma to results from your elementary ODE’s course,


consider the nth order scalar linear equation x(n) + p1 x(n − 1) + … + pnx =
0, where the pi’s are constants. Then there is the usual associated first order
system:
Hence,

which implies

and λn + p1 λn − 1 + p2λn−2 + … + pn = 0 is the well-known characteristic


or auxiliary equation from elementary ODE’s.

In returning to the last lemma, if A has n distinct eigenvalues λ1,…,λn with


associated eigenvectors c1,…,cn, then the solutions xj(t) = eλjtcj, 1 ≤ j ≤ n,
constitute a basis for the solution space of x′ = Ax.

Question: What if det[A − λI] = 0 has roots of multiplicity greater than 1?

Recall that in the case of the scalar equation, if λ0 is a root of λn + p1λn − 1


+ … + pn = 0 of multiplicity k, then eλ0t, teλ0t, …, tk−1eλ0t are k L.I. solutions
corresponding to the root λ0, i.e., (c1 + c2t + … + cktk−1)eλ0t is a solution for all
values of the scalars ci.

In answering our question, suppose we look at the possibility of x′ = Ax


having a solution of the form
where the ci’s are n-vectors and λj is a multiple root of the characteristic
equation. To see what is involved in x(t) being a solution, let us examine x′
(t) − Ax(t):

Now

yields

If

then x′(t) − Ax(t) = 0, or x′(t) = Ax(t), or x(t) is a solution. We will now


check to see if conditions (5.16) are indeed the case. For this, let λ be an
eigenvalue of A of multiplicity k > 1 and write B = A − λI. Then B is
singular, because λ is an eigenvalue. Let 𝒩(B) be the null space of B, i.e.,
𝒩(B) = {c ∊ ℂn | Bc = 0}. Now 𝒩(B) ⊆ 𝒩(B2) ⊆ 𝒩(B3) ⊆ …, and since
our space ℂn is n-dimensional, n < ∞, there exists an integer r ≥ 1 such
that 𝒩(B) ⊆ 𝒩(B2) ⊆ … ⊆ 𝒩(Br), and 𝒩(Br) = 𝒩(Bj), for all j ≥ r.
Moreover, 𝒩(Br) has dimension k, hence we can find a basis d1,…, dk for
𝒩(Br). Let us consider one of these basis vectors dl. For notation, define
. (Let “:=” denote “defines”.) Then, we define , as

Now, , since was a basis element of (Br).


Comparing these results to conditions (5.16), we see that we have the same
type of equations here with playing the part of ck (things are listed in the
reverse order here from the manner listed in (5.16)).

We conclude then that

is a solution of x′ = Ax, where dl is a basis vector in 𝒩(Br) (note that x(0) = dl,


which must be the initial condition “this solution” satisfies). Since
dim 𝒩(Br) = k, there are k of these vectors dj, each of which produces a
solution corresponding to λ. Hence we get k solutions for the multiplicity of
the root λ of the characteristic polynomial.
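Written out explicitly (a sketch of the solution the construction above produces, with B = A − λI, d = dl one of the basis vectors of 𝒩(Br), and Brd = 0):

\[
x(t) = e^{\lambda t}\Bigl( d + tBd + \frac{t^2}{2!}B^2 d + \cdots + \frac{t^{r-1}}{(r-1)!}B^{r-1} d \Bigr),
\qquad x(0) = d .
\]

Indeed, x′(t) = λx(t) + B e^{λt}(d + tBd + ⋯ + t^{r−2}B^{r−2}d/(r − 2)!) = λx(t) + Bx(t) = Ax(t), using Brd = 0.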

Example 5.4. Let

Then,
which yields

Hence, λ = 2 is an eigenvalue of the matrix A of multiplicity 4.

Let

We now determine the successive null-spaces of B, B2, etc.

First, find the form of C such that BC = (0). Well,

So, c2 = 2c1, c4 = 0, and c3 is arbitrary. Hence,

Now
We could extend the basis for 𝒩(B) to a basis for 𝒩(B2) = ℂ4 or we
could find another basis for ℂ4.

Take {e1, e2, e3, e4} as our basis for 𝒩(B2). Apply B to each of these basis
vectors as is done in the calculation above denoted by (5.17). Since B2 (any
vector) = (0), from the form of a solution given in (5.18), four solutions
will be of the form (note: r = 2, λ = 2 relative to (5.17))

We have

with corresponding solutions


As noted before,

From this, we have the fundamental matrix of the system

as

Now, there is an alternate method for finding 4 L.I. solutions of x′ = Ax,


with A as above. Consider the basis spanning

We can extend this to a basis of 𝒩(B2) = ℂ4 by

Corresponding to the first two basis elements, since

we have the two solutions


We can also obtain x3(t) and x4(t) by applying B to

so that

Here, our x1(t) and x2(t) are linear combinations of those found using the
first method.
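The construction of solutions from generalized eigenvectors can also be checked numerically. The sketch below uses a hypothetical 4 × 4 matrix with the single eigenvalue 2 and (A − 2I)^2 = 0 (it is not the matrix of Example 5.4, whose entries are not reproduced here) and compares the solutions e^(2t)(I + tB)d with the matrix exponential computed by SciPy:

    import numpy as np
    from scipy.linalg import expm

    lam = 2.0
    # Hypothetical nilpotent part B = A - 2I with B^2 = 0 (so r = 2), not from Example 5.4.
    B = np.array([[0., 1., 0., 0.],
                  [0., 0., 0., 0.],
                  [0., 0., 0., 1.],
                  [0., 0., 0., 0.]])
    A = lam * np.eye(4) + B
    assert np.allclose(B @ B, 0)                    # B is nilpotent of index 2

    t = 0.9
    Phi = np.column_stack([np.exp(lam * t) * (np.eye(4) + t * B) @ e
                           for e in np.eye(4)])     # solutions e^{lam t}(I + tB)e_j
    assert np.allclose(Phi, expm(t * A))            # agrees with the matrix exponential
    print("e^{tA} = e^{lam t}(I + tB) when (A - lam I)^2 = 0")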

Exercise 34. Given A, find 4 linearly independent solutions of x′ = Ax for


each of the following:

Definition 5.5. A, B ∊ Mn are said to be similar if there exists a nonsingular matrix C such that C−1AC = B. (This defines an equivalence relation on Mn.)

Theorem 5.7. Every matrix A ∊ Mn is similar to a block diagonal matrix


where J0 is the diagonal matrix

and each of the blocks Ji, 1 ≤ i ≤ s, is of the form

where the λq+i’s on the diagonal are the same and 1’s are on the superdiagonal.

The numbers λ1, λ2, …, λq+s are the eigenvalues of A and are repeated in J according to their multiplicities. Furthermore, if λj is a simple eigenvalue, then λj appears in J0, hence, if all the eigenvalues are simple, J is a diagonal matrix. The matrix J is called the “Jordan Canonical Form of A” and is unique up to rearrangements of the blocks Ji, 0 ≤ i ≤ s, along the diagonal. (So, there exists a C such that C−1AC = J.)
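If one wants to see a Jordan canonical form computed symbolically, sympy provides jordan_form; the matrix below is a hypothetical example, not one from the text:

    import sympy as sp

    # Hypothetical 3x3 matrix with a repeated eigenvalue.
    A = sp.Matrix([[2, 1, 0],
                   [0, 2, 0],
                   [0, 0, 3]])
    C, J = A.jordan_form()            # sympy returns C, J with A = C*J*C**(-1)
    assert C.inv() * A * C == J       # C^{-1} A C = J
    sp.pprint(J)                      # a 1x1 block for the simple eigenvalue 3 and
                                      # a 2x2 block with 2 on the diagonal, 1 above it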

Let us look at the previous example and see how the Jordan form relates.
Recall x′ = Ax, where
We found that det[A − λI] = (λ − 2)^4 and

and B^2 = (0). Extending the basis for 𝒩(B) to a basis for 𝒩(B^2) = ℂ4, we
obtained

Applying B to c2 and d2,

we have Bc2 = −2c1 and Bd2 = d1. Hence [A − 2I]c2 = −2c1, or Ac2 = 2c2 − 2c1. On the other hand, Bc1 = [0] ⇔ [A − 2I]c1 = 0. So, Ac1 = 2c1.

Now Bd2 = d1 yields Ad2 = 2d2 + d1 and Bd1 = [0] yields Ad1 = 2d1. Form
C by taking c1, c2, d1, d2 as its columns; i.e., C = [c1, c2, d1, d2]4×4.

Now
where C−1C = I because

This is almost the Jordan form. We need to normalize the −2 entry. Thus,
we return to the point where we extended our basis and we choose

Then, [A − 2I]c2 = c1. So, Ac2 = 2c2 + c1 and continuing as above, we


obtain

This is the Jordan canonical form of A.

Consider now the matrix IVP: X′ = AX, X(0) = I, where A is a constant


matrix. We have previously shown that the solution is given by X(t) = etA.
Let C be a nonsingular matrix such that C−1AC = J, the Jordan canonical
form of A.

Let Y (t) = C−1X(t)C = C−1etAC. Then

Furthermore, Y (0) = C−1IC = I. So, Y (t) is the solution of the IVP

Hence, Y (t) = etJ and

Hence, if X(t) = CY (t)C−1, and if we can find Y (t) = etJ, we have an idea of
what X(t) = etA looks like.

Let us calculate etJ. First,

where

Thus
Hence, calculating etJ is reduced to calculating etJi for each block.

First,

After summing, we get

Next, consider an arbitrary block,

For brevity, let λ = λq+i, so that


Now

yield . Hence

that is,
Thus, we know what each block of etJ looks like. Since etA = CetJC−1
where C is a nonsingular constant matrix, the elements in etA are of the
form pj(t)e^(λjt), where each pj(t) is a polynomial in t. Furthermore, if the
largest of the blocks, Ji, 1 ≤ i ≤ s, (not J0), is an m × m matrix, then all
polynomials appearing in the elements of etA are of degree ≤ m − 1.

Since each solution of x′ = Ax is of the form x(t) = etAc0, where c0 ∊ ℂn,


the elements of the vector x(t) are also of the form pj(t)e^(λjt). This leads to the following remarks concerning solutions of x′ = Ax.

Remark 5.5.

(1) If Re λj < 0, for each eigenvalue λj of A, then all solutions x(t) are such that ‖x(t)‖ is bounded on [0, +∞) and ‖x(t)‖ → 0 as t → +∞.

Reason: Using the max norm, where x(t) is a vector, if we look at a piece pj(t)e^(λjt) of some entry, then if Re λj < 0, pj(t)e^(λjt) → 0 as t → +∞. So, ‖x(t)‖ → 0 as t → +∞.

(2) If Re λj ≤ 0, for each eigenvalue λj of A, then all the solutions x(t) are bounded on [0, +∞) iff the λj’s for which Re λj = 0 appear only in J0, the diagonal block in the Jordan canonical form of A.

Reason: e^(λjt) = e^(iωt) = cos ωt + i sin ωt, for Re λj = 0 (writing λj = iω), which is bounded. Thus the only way for a solution to be unbounded is for a polynomial factor to appear, i.e., for such a λj to lie in one of the blocks Ji with 1’s off the diagonal.

(3) If Re λj > 0 for some eigenvalue λj of A, then x′ = Ax has some solutions which are unbounded on [0, +∞).
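A small numerical illustration of Remark 5.5 (hypothetical matrices, not from the text): the sign of the largest real part of the eigenvalues governs whether ‖etA‖ decays or grows.

    import numpy as np
    from scipy.linalg import expm

    def spectral_abscissa(A):
        """Largest real part among the eigenvalues of A."""
        return max(np.linalg.eigvals(A).real)

    A_stable = np.array([[-1.0, 5.0],
                         [0.0, -2.0]])     # hypothetical: Re(lambda) < 0 for both eigenvalues
    A_unstable = np.array([[0.1, 1.0],
                           [0.0, -3.0]])   # hypothetical: one eigenvalue with Re(lambda) > 0

    for A in (A_stable, A_unstable):
        norms = [np.linalg.norm(expm(t * A), ord=np.inf) for t in (1, 10, 50)]
        print(spectral_abscissa(A), ["%.2e" % n for n in norms])
    # For A_stable the norms decay toward 0; for A_unstable they grow without bound.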

5.7 The Logarithm of a Matrix


Let’s now consider another question. If A ∊ Mn, then etA is a nonsingular matrix, for all t ∊ ℝ, and in particular eA is nonsingular.

Question: If A is nonsingular, is there a B ∊ Mn such that eB = A? If so, we


will call B a logarithm of A; i.e., B = log A.

Let J be the Jordan canonical form of A and assume C−1AC = J. Then if eD


= J, it follows that CeDC−1 = CJC−1 = A. So, e^(CDC−1) = A. If D = log J, then CDC−1 = log A. Furthermore, if D is in block form, say

then

Now we want eD = J, thus it suffices to determine D0, D1, …, Ds such that eDi = Ji, 0 ≤ i ≤ s.

Note that, if A is nonsingular, then all the eigenvalues are nonzero, and
hence, each eigenvalue has a logarithm (in the complex sense, log λ = ln|λ|
+ i arg λ).

Now
because

Now we look at a typical block,

Now λIm is diagonal, so by techniques used with D0 and J0, log(λIm) =


(logλ)Im. If we can determine

then it and (log λ)Im commute and we will have

Thus the problem of determining log Ji is reduced to determining


Recalling that log(1 + x) = x − x^2/2 + x^3/3 − ⋯ is absolutely convergent for |x| < 1 and recalling the form of Zm, we have formally that

This is a finite sum, since Zm^m = 0. Thus,


Thus, it is the case that (5.20) is obtained. Consequently,

In ending our discussion of constant coefficient matrix theory, let’s apply


this last concept to our previous example x′ = Ax, where

We calculated the matrix C which transforms A into Jordan form to be

and we obtained

Since the blocks of the Jordan form are the same, we need only calculate the logarithm for one block.

Note: m = 2 and λ = 2, so

Hence,
Therefore,
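As a closing numerical check of this construction (a sketch using only the block data m = 2 and λ = 2 from above; the transforming matrix C is not reproduced here), one can confirm that the finite logarithm series of a Jordan block really does invert the exponential:

    import numpy as np
    from scipy.linalg import expm, logm

    lam, m = 2.0, 2                       # the block treated above: m = 2, lambda = 2
    Z = np.diag(np.ones(m - 1), 1)        # nilpotent part: 1's on the superdiagonal
    J_block = lam * np.eye(m) + Z

    # log(lam*I + Z) = (log lam) I + log(I + Z/lam); the series for log(I + X)
    # terminates because (Z/lam)^m = 0.
    X = Z / lam
    series = sum((-1) ** (k + 1) * np.linalg.matrix_power(X, k) / k for k in range(1, m))
    D_block = np.log(lam) * np.eye(m) + series

    assert np.allclose(expm(D_block), J_block)       # e^{D} recovers the Jordan block
    assert np.allclose(D_block, logm(J_block))       # agrees with scipy's matrix logarithm
    print(D_block)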
Chapter 6

Periodic Linear Systems and Floquet


Theory

6.1 Periodic Homogeneous Linear Systems and Floquet Theory

Definition 6.1. A matrix-valued function A: (−∞, +∞) → Mn is said to have period ω > 0 if A(t + ω) = A(t), for all t ∈ ℝ.
Theorem 6.1 (Floquet). Let a matrix-valued function A: (−∞, ∞) → Mn be continuous and have period ω > 0. Then a fundamental matrix solution for the linear system X′ = A(t)X can be written in the form X(t) = Y(t)etR, where R ∈ Mn is a constant matrix and Y(t): (−∞, +∞) → Mn is continuous and has period ω. Furthermore, Y(t) is nonsingular on (−∞, ∞).
Proof. Let X(t) be a fundamental matrix solution of X′ = A(t) X. Then by the
chain rule, (d/dt)X(t + ω) = X′(t + ω) = A(t + ω)X(t + ω) = A(t)X(t + ω).

Hence, X(t + ω) is also a solution of X′ = A(t)X. By the previous theory,


there exists a constant C ∈ Mn such that X(t + ω) = X(t)C; i.e., these solutions are constant multiples of each other. We can determine C as follows: X(0 + ω) = X(0)C yields C = X−1(0)X(ω). C is nonsingular and consequently C has a logarithm. Hence, there exists a constant matrix R such that C = eωR (actually, R = ω−1 log C).
Define
Obviously, Y(t) has the following properties:
(1) Y(t) is continuous on ℝ.
(2) Y(t) is nonsingular.
(3) Y(t) is periodic of period ω, i.e.,

Remark 6.1. Since Y (t) is continuous on ℝ and periodic, each entry in Y (t)
is continuous and periodic, hence bounded. So, ‖Y(t)‖ is bounded on ℝ.
Moreover, by the way the determinant is defined, det Y(t) is periodic. Since det Y(t) ≠ 0, det Y(t) is bounded away from zero, and since it is periodic, it is bounded. Consequently, ‖Y−1(t)‖ is bounded.
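The Floquet construction in Theorem 6.1 can also be carried out numerically. The sketch below integrates X′ = A(t)X over one period for a hypothetical π-periodic A(t) (not an example from the text), forms C = X(ω), and recovers R = ω−1 log C:

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.linalg import logm

    omega = np.pi
    def A(t):                                  # hypothetical omega-periodic coefficient matrix
        return np.array([[0.0, 1.0],
                         [-(1.0 + 0.3 * np.cos(2 * t)), -0.1]])

    def rhs(t, y):                             # flatten X' = A(t) X for the ODE solver
        return (A(t) @ y.reshape(2, 2)).ravel()

    def X(t):                                  # fundamental matrix solution with X(0) = I
        sol = solve_ivp(rhs, (0.0, t), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
        return sol.y[:, -1].reshape(2, 2)

    C = X(omega)                               # monodromy matrix: X(t + omega) = X(t) C
    R = logm(C) / omega                        # C = e^{omega R}
    multipliers = np.linalg.eigvals(C)         # characteristic multipliers
    print(multipliers, np.allclose(X(1.0 + omega), X(1.0) @ C, atol=1e-6))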
Definition 6.2 Let A(t) and B(t) be defined on [a, +∞). Then A(t) is said to
be kinematically similar to B(t) on [a, +∞) in case there is an absolutely
continuous (we will think differentiable) nonsingular matrix function L(t)
on [a, +∞) such that ‖L(t)‖ and ‖L−1(t)‖ are both bounded on [a, +∞) and
such that the transformation x = L(t)y transforms the system x′ = A(t)x into
the system y′ = B(t)y; i.e., L−1(t)A(t)L(t) − L−1(t)L′(t) ≡ B(t) on [a, +∞).
Exercise 35. (i) Show that kinematic similarity is an equivalence relation on
the set of all continuous n × n matrix functions on [a, +∞).
(ii) Show that if A(t) is continuous and has period ω on (−∞,∞), then A(t)
is kinematically similar to a constant matrix on ℝ.
Note: Kinematic similarity can be thought of as an extension of
similarity, by taking L to be a constant matrix.
Now let A(t) be continuous and ω-periodic on ℝ. If X(t) is a fundamental
matrix solution of X′ = A(t)X, then there is a nonsingular matrix C such that
X(t + ω) = X(t)C, for all t ∈ ℝ.
Remark 6.2. The matrix C is not uniquely determined by A(t), however C
is unique up to similarity.
Proof. Let X(t) and Φ(t) both be fundamental matrix solutions of X′ = A(t)X.
Then there are matrices C and D such that X(t + ω) = X(t)C and Φ (t + ω) =
Φ (t)D. Furthermore, there is a nonsingular matrix B such that X(t) = Φ (t)B,
since they are both fundamental matrix solutions. So,
Hence, C = B−1DB; i.e., C and D are similar. ☐
It follows then that A(t) does uniquely determine the eigenvalues of C,
because similar matrices have the same eigenvalues. In fact, these
eigenvalues are called the characteristic multipliers of the periodic system
X′ = A(t)X.
For the above matrix C, where X(t + ω) = X(t)C, if σ1, …, σn are the eigenvalues of C and λ1, …, λn are the eigenvalues of R, where C = eωR (i.e., R = ω−1 log C), then if both sets of eigenvalues are properly ordered, we will have that σj = e^(ωλj). The numbers λj are not uniquely determined, but are determined up to additive integer multiples of 2πi/ω. The numbers λj, 1 ≤ j ≤ n, are called the characteristic exponents of the periodic system X′ = A(t)X.
Remark 6.3. Corresponding to each eigenvalue σ of C, where X(t + ω) =
X(t)C, there exists a nontrivial vector solution x(t) of the vector system x′ =
A(t)x such that x(t + ω) = σx(t) for all t ∈ ℝ.
Proof. Let σ be an eigenvalue of C and let x0 be an associated eigenvector,
then Cx0 = σ x0. Let x(t) be a solution of x′ = A(t)x given by x(t) ≡ X(t)x0.
Then,

Remark 6.4. This last remark tells us when we can obtain periodic
solutions of a periodic system; i.e., if 1 is an eigenvalue of C, then the
system x′ = A(t)x has a nontrivial ω-periodic solution.
Suppose σ is an nth root of unity and σ is an eigenvalue of C. Then there
exists a nontrivial solution x(t) with x(t + ω) = σx(t). We iterate this; that is,
replace “t” by “t + ω”. Hence, x(t + 2ω) = σx(t + ω) = σ^2x(t), …, x(t + nω) = σ^nx(t) = x(t) (since σ^n = 1). Thus, in this case, x′ = A(t)x has an nω-periodic solution.
Remark 6.5. For our next observation, if σ is an eigenvalue of C and |σ| ≤
1, then x′ = A(t)x has a nontrivial solution x(t) which is bounded on [0, ∞).
(Note: C is nonsingular, thus σ ≠ 0 and σ−1 exists.) To see the bounded part,
let t ≥ 0 be arbitrary, but fixed, and consider the points Kω, K = 0, 1, 2, …. See Figure 6.1.
Fig. 6.1 The points Kω, K = 0,1,2,….

Then, there exists n ∈ ℤ such that nω ≤ t ≤ (n + 1)ω. Therefore, t − nω


∈ [0, ω].
Using our above iteration technique, x(t) = x((t − nω) + nω) = σnx(t −
nω). So, ‖x(t)‖ = |σ|n‖x(t − nω)‖ ≤ M, where M = max0≤t ≤ ω ‖x(t)‖.
Note: If |σ| = 1, then there is a solution bounded on all of ℝ.
Example 6.1 Consider the scalar equation x′ = a(t)x, where a(t) is
continuous and ω-periodic on ℝ. Solving the IVP

we find x(t) = x0e^(∫0t a(s)ds).
Therefore, x(t) is ω-periodic iff x0 = 0 or ∫0ω a(s)ds = 0.
If x0 ≠ 0, then

Example 6.2 (Hill’s equation). Consider x″ + p(t)x = 0, where p(t) is


continuous and ω-periodic on ℝ. Then the corresponding first order system
is given by y′ = A(t)y, where

Let x1(t), x2(t) be solutions of x″ + p(t)x = 0 satisfying the initial conditions x1(0) = 1, x1′(0) = 0 and x2(0) = 0, x2′(0) = 1, respectively. Then a fundamental matrix solution of X′ = A(t)X is given by

By the Floquet theory, we have X(t + ω) ≡ X(t)C, for all t. At t = 0, X (ω) =


X(0)C = IC = C, so

Notice also that, by Theorem 5.4, det X(t) ≡ det X(0) = 1 (since Tr A(t) ≡ 0), and the characteristic multipliers are the roots of

σ^2 − 2ασ + 1 = 0, where α = [x1(ω) + x2′(ω)]/2.

If α ∈ ℝ and α^2 < 1, then the eigenvalues are σ = α ± i√(1 − α^2), implying |σ|^2 = 1. So, both eigenvalues satisfy |σ| = 1, and in this case, from Remark 6.5, all the solutions of the D.E. are bounded on ℝ.
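A numerical illustration of this computation for a sample Hill equation (the coefficient p(t) below is hypothetical, not from the text): integrate over one period, form α from the trace of X(ω), and check whether α^2 < 1, in which case both multipliers lie on the unit circle.

    import numpy as np
    from scipy.integrate import solve_ivp

    omega = np.pi
    p = lambda t: 2.5 + 0.2 * np.cos(2 * t)          # hypothetical omega-periodic coefficient

    def hill(t, y):                                  # y = (x, x')
        return [y[1], -p(t) * y[0]]

    def solve(x0, v0):
        return solve_ivp(hill, (0.0, omega), [x0, v0], rtol=1e-10, atol=1e-12).y[:, -1]

    x1w, dx1w = solve(1.0, 0.0)                      # x1(0) = 1, x1'(0) = 0
    x2w, dx2w = solve(0.0, 1.0)                      # x2(0) = 0, x2'(0) = 1
    C = np.array([[x1w, x2w], [dx1w, dx2w]])         # C = X(omega), det C = 1

    alpha = 0.5 * (x1w + dx2w)                       # half the trace of C
    sigma = np.roots([1.0, -2.0 * alpha, 1.0])       # sigma^2 - 2*alpha*sigma + 1 = 0
    print("alpha =", alpha, "|sigma| =", np.abs(sigma))
    # For this choice alpha^2 < 1, so both multipliers have modulus 1 and all
    # solutions of the D.E. are bounded on R.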
Next, we provide some concluding remarks concerning Hill’s equation:

where φ(t) is ω-periodic and is a constant.


Remark 6.6. We have that the characteristic multipliers σ1, σ2 are roots of σ^2 − 2ασ + 1 = 0, where α = [x1(ω) + x2′(ω)]/2. Since the constant term in the quadratic is 1, we
have σ1 σ2 = 1, and hence |σ1| |σ2| = 1. If σ1 and σ2 are distinct such that |σ1|
= |σ2| = 1, then all the solutions of the D.E. are bounded on ℝ. (Therefore,
the first derivatives are also bounded).
In particular, this will be the case if α ∈ ℝ and α2 < 1. On the other
hand, if |σ1| < 1, then |σ2| > 1 and the D.E. has two L.I. solutions, one
bounded on [0, ∞) with ‖x(t)‖ → 0, as t →∞, and the other unbounded on
[0, +∞).
That is, the solutions

and

where Cc1 = σ1 c1, Cc2 = σ2 c2. Hence,

As a last result for the homogeneous equation X′ = A(t)X, where A(t) is


ω-periodic, we have the following remark.
Remark 6.7. x(t) is a nontrivial ω-periodic solution of x′ = A(t)x iff 1 is a
characteristic multiplier.
Proof. Let x(t) be the solution of

Assume x(t) is a nontrivial ω-periodic solution of x′ = A(t)x. Since x(t) is


nontrivial, x(0) = c0 ≠ 0. Now x(t) = X(t)c0, so X(t)c0 = x(t) = x(t + ω) = X(t
+ ω)c0 = X(t)Cc0 (where X(t + ω) = X(t)C). Hence, Cc0= c0 which implies 1
is a characteristic multiplier.
For the converse, if 1 is a characteristic multiplier, then Cc0 = c0 for some c0 ≠ 0, where again X(t + ω) = X(t)C. Then x(t) = X(t)c0 will be a nontrivial ω-periodic solution of

6.2 Periodic Nonhomogeneous Linear Systems and Floquet Theory

Let us now turn our consideration to periodic nonhomogeneous linear


systems. So, consider

where A(t) is a continuous n × n matrix function on ℝ with period ω, and


f(t) is a continuous n-vector function on ℝ with period ω.
Theorem 6.2 Let (6.1) be a periodic system as defined above. Then a
solution x(t) of (6.1) has period ω iff x(0) = x(ω).
Proof. Assume x(t) is an ω-periodic solution. Then, x(t + ω) = x(t), for all t
∈ ℝ. Hence, x(ω) = x(0).
On the other hand, assume that x(t) is a solution of (6.1) with x(0) =
x(ω). Then

So, x(t + ω) is a solution of (6.1). Moreover, x(t + ω)|t = 0 = x(t)|t=0 by


assumption, and hence, x(t) and x(t + ω) are both solutions of the IVP

By uniqueness of solutions of the IVP, x(t) ≡ x(t + ω). ☐


Corollary 6.1 Let A(t) be a continuous n × n matrix function on ℝ having
period ω. Then (6.1) has a unique solution with period ω corresponding to
each continuous n-vector f(t) on ℝ having period ω iff “1” is not a
characteristic multiplier of x′ = A(t)x (iff the only periodic solution with
period ω of x′ = A(t)x is x(t) ≡ 0, by Remark 6.7).
Proof. First, suppose that (6.1) has a unique solution with period ω corresponding to each continuous ω-periodic n-vector f(t). Well, f(t) ≡ 0 is such a vector. By Remark 6.7, 1 is not a characteristic multiplier of x′ = A(t)x, because x(t) ≡ 0 is the unique ω-periodic solution.
For the other part, assume 1 is not a characteristic multiplier of x′ =
A(t)x. Then, if X(t) is the solution of

every solution of (6.1) can be written in the form x(t) = X(t)x0 + X(t)∫0t X−1(s)f(s)ds, where x(0) = x0.
Now (6.1) has an ω-periodic solution iff x(0) = x(ω) by Theorem 6.2.
For x(0) = x(ω), we must have

Since 1 is not a characteristic multiplier and since X(ω) = C, we have X(ω)


− I is nonsingular. Therefore x0 must have the unique value

Then the unique solution x(t) is given by

Let’s examine the solution x(t) in the corollary more closely.

where

Recall that X (t1, t2) = X(t1) X−1(t2). Hence,

We can show that

In summary, (6.1) with A(t) continuous and ω-periodic has a unique


solution ω-periodic corresponding to each continuous ω-periodic f(t) iff 1 is
not an eigenvalue of X(ω) = C, where X(t) is the solution of
and this unique solution can be expressed as

where

Here, G(t, s) is called a Green’s function.


Example 6.3. Consider the scalar equation x′ = a(t)x + f(t), where a(t) and f(t) are continuous on ℝ and ω-periodic. Find the Green’s function. Here X(t) = e^(∫0t a(s)ds). The solution x(t) of

is given by .
Now, if 1 is not an eigenvalue of X(ω), then .
So in this case, x′ = a(t)x + f(t) has a periodic solution for each ω-periodic f(t) given by x(t) = ∫0ω G(t, s)f(s)ds, where

In considering a Green’s function, one can fix s and consider t-values on


both sides of s, or one can fix t and consider s-values on both sides of t.
Let us look at some properties of the above Green’s function, G(t, s).
(1) For a fixed s, 0 < s < ω, G(t, s) as a function of t is a solution of x′ =
a(t)x on [0, s] and on [s, ω]; e.g., for [0, s] and is a solution of x′ =
a(t)x and any linear combination is also a solution. Thus,

is a solution on [0, s]. Similarly on [s, ω] and t ≥ s,

is a solution. Hence, G(t, s) is a solution of x′ = a(t)x on [0, s] and on [s,


ω].
(2) For s fixed with 0 < s < ω
i.e., G (0, s) = G(ω, s).
(3) Fix s with 0 < s < ω, and let t approach s from the right (denoted s+)
and also let t approach s from the left (denoted s−). Then

In this example, since we are assuming that ∫0ω a(s)ds ≠ 0, it follows that no solution


x(t) of

is a nontrivial periodic solution by the corollary. Yet, the Green’s function is


almost a periodic solution of x′ = a(t)x, but from (3) above, it has a jump of
+1 at s. See Figure 6.2.

Fig. 6.2 G(t, s) has a jump of +1 at t = s.
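For readers who want to experiment, the sketch below builds one common explicit form of such a Green’s function for a hypothetical a(t) and f(t) (the book’s displayed formula is not reproduced above, so this expression is offered only as one consistent with properties (1)–(3)), and checks the jump of +1, the equality G(0, s) = G(ω, s), and that x(t) = ∫0ω G(t, s)f(s)ds satisfies the D.E.:

    import numpy as np
    from scipy.integrate import quad

    omega = 2 * np.pi
    a = lambda t: 0.3 + np.sin(t)            # hypothetical omega-periodic coefficient
    f = lambda t: np.cos(3 * t)              # hypothetical omega-periodic forcing

    A = lambda t: quad(a, 0.0, t)[0]         # A(t) = integral_0^t a(s) ds
    E = lambda t: np.exp(A(t))               # scalar fundamental solution, E(0) = 1
    Eom = E(omega)                           # here integral_0^omega a != 0, so Eom != 1

    def G(t, s):
        """A Green's function for x' = a(t)x + f(t) consistent with properties (1)-(3)."""
        base = E(t) / E(s) / (1.0 - Eom)
        return base if s <= t else base * Eom

    s = 1.2
    print(G(s + 1e-9, s) - G(s - 1e-9, s))   # jump of +1 at t = s (approximately 1)
    print(G(0.0, s), G(omega, s))            # equal values: G(0, s) = G(omega, s)

    # the omega-periodic solution and a spot-check that it satisfies the D.E.
    x = lambda t: (quad(lambda s: G(t, s) * f(s), 0.0, t)[0]
                   + quad(lambda s: G(t, s) * f(s), t, omega)[0])
    t0, h = 2.0, 1e-5
    print((x(t0 + h) - x(t0 - h)) / (2 * h) - (a(t0) * x(t0) + f(t0)))  # approximately 0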

We would next like to show the existence of solutions which are periodic
using a fixed point theorem. One of the interesting asides of this fixed point
theorem is that we can also apply it to an alternate proof of the Picard
Existence Theorem.
Theorem 6.3 (Contraction Mapping Principle). Let (M, d) be a complete metric space and assume that there exist a function T: M → M and a constant K, 0 ≤ K < 1, such that d(T(x), T(y)) ≤ K d(x, y), for each x, y ∈ M.
Then T has a unique fixed point; i.e., there exists a unique x0 ∈ M such that
T(x0) = x0.

Proof. Clearly, T is continuous. Now let z ∈ M be arbitrarily chosen and


form the sequence, z,Tz, T2z = T(Tz), T3z,…. Then,

Assume n > m > 1, then

Then, as m, n → ∞, d(Tn(z), Tm(z)) → 0, hence {Tn(z)} is a Cauchy sequence. M is complete, thus there exists x0 ∈ M such that limn→∞ Tn(z) = x0. Since T is continuous,

So x0 is “a” fixed point of T. For the uniqueness, let y0 also be a fixed point
of T. Then

i.e., (1 − K)d(x0, y0) ≤ 0. Since (1 − K) > 0, we have d(x0, y0) = 0.


Hence, x0 = y0. So, T has a unique fixed point x0. ☐
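A minimal numerical illustration of the Contraction Mapping Principle (a hypothetical contraction on ℝ, not from the text): iterating T from any starting point converges to the unique fixed point.

    # T is a contraction on (R, | |) with constant K = 0.5:
    # |T(x) - T(y)| = 0.5 |x - y|, so the iterates z, Tz, T^2 z, ... converge to
    # the unique fixed point x0 = 2 (the solution of x = 0.5 x + 1).
    T = lambda x: 0.5 * x + 1.0

    z = 17.3                      # arbitrary starting point
    for n in range(60):
        z = T(z)
    print(z, abs(T(z) - z))       # z is (numerically) the fixed point 2.0; residual ~ 0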
Corollary 6.2 Assume the conditions of the Contraction Mapping Principle,
except assume that for some m ≥ 1, Tm is a contraction; i.e., there exists 0 ≤ K < 1 such that d(Tm(x), Tm(y)) ≤ K d(x, y), for all x, y ∈ M. Then T has a unique fixed point.
Proof. In this case we consider the sequence By the Contraction Mapping
Principle, Tm has a unique fixed point x0 ∈ M, that is, Tm(x0) = x0. Then
T(Tm(x0)) = Tm(T(x0)) = T(x0). So, T(x0) is a fixed point of Tm, and so by
uniqueness of x0, we have T(x0) = x0. Hence, x0 is a fixed point of T.
If Ty0 = y0, then we have Tm (y0) = y0 and Tm(x0) = x0 and so y0 = x0.
Thus, T has a unique fixed point x0 ☐
Before presenting our alternate proof of the Picard Existence Theorem,
we define a metric on the set of continuous n-vector-valued functions on [a,
b].
Let the metric space

with metric

That (M, d) is a complete metric space follows from the completeness of ℝn and the fact that a uniform limit of continuous functions is continuous.

Theorem 6.4 (Picard Existence Theorem). Assume f(t, x) : [a, b] × ℝn →


ℝn is continuous and satisfies a Lipschitz condition ‖ f(t, x) − f(t, y)‖ ≤ K‖ x
− y ‖, for all (t, x), (t, y) ∈ [a, b] × ℝn, where K > 0. Then given (t0, x0) ∈
[a, b] × ℝn, the IVP

has a unique solution on [a, b].


Proof. Define T: M → M by (Tϕ)(t) = x0 + ∫t0t f(s, ϕ(s))ds.

We claim that T has a unique fixed point.


Consider

Hence,

Now

and so,

So,

Now for m sufficiently large, we have K^m(b − a)^m/m! < 1. Hence, for large m, Tm is a contraction and so by Corollary 6.2, T has a unique fixed point u(t) ∈ M.
So,

which is equivalent to the statement that u(t) is a solution of

and this solution is unique since u is the unique fixed point. ☐
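The proof above is constructive, and the Picard iterates can be computed directly. The sketch below (for the hypothetical IVP x′ = x, x(0) = 1 on [0, 1], not an example from the text) applies the operator T repeatedly and watches the iterates converge to e^t:

    import numpy as np

    # Picard iterates for x' = x, x(0) = 1 on [0, 1]: (T u)(t) = 1 + int_0^t u(s) ds.
    # f(t, x) = x is Lipschitz with K = 1, so T^m is a contraction for large m and
    # the iterates converge uniformly to the unique solution e^t.
    t = np.linspace(0.0, 1.0, 2001)
    u = np.ones_like(t)                                   # u_0(t) = 1 (any starting guess)
    for _ in range(25):
        integral = np.concatenate(([0.0],
                                   np.cumsum(0.5 * (u[1:] + u[:-1]) * np.diff(t))))
        u = 1.0 + integral                                # u_{m+1} = T(u_m)
    print(np.max(np.abs(u - np.exp(t))))                  # small: iterates approach e^t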


For our last result in this section, we are concerned with periodic
solutions for the scalar D.E. x′ = a(t)x + f(t, x). We will apply the
Contraction Mapping Principle. For this, let

with metric d(ϕ, ψ) = maxt∈ℝ |ϕ(t) − ψ(t)| = max0≤t≤ω |ϕ(t) − ψ(t)|. Then (M, d) is a complete metric space.
Theorem 6.5 Let a(t) ∈ M and let f(t, x) be continuous on ℝ× ℝ with f(t, x)
ω-periodic in t, for each fixed x (Note that for ϕ ∈ M, f(t, ϕ(t)) is ω-
periodic.) Assume , for all (t, x),(t, y) ∈ ℝ × ℝ, and . Then the D.E. x′ =
a(t)x + f(t, x) has a unique ω-periodic solution.
Proof. If ∫0ω a(s)ds ≠ 0, then x′ = a(t)x does not have a nontrivial ω-periodic solution. Hence, we can form a Green’s function G(t, s).
Now define T: M → M via

Consider

So by the Contraction Mapping Principle, T has a unique fixed point ϕ such


that

which is the unique ω-periodic solution of x′ = a(t)x + f(t, x). ☐


Exercise 36. Calculate the Green’s function G(t, s) in the case a(t) ≡ a (constant), which has any desired period.
Chapter 7

Stability Theory

7.1 Stability of First Order Systems

We will assume that in the following definitions, we are talking about a


fixed first order system x′ = f(t, x) in which f(t, x) is continuous on [t0, +∞)
× ℝn or continuous on [t0, ∞) × B(0,R), where B(0,R) = {x ∈ ℂn | ‖x‖ < R}
(sometimes denoted BR(0)).

Definition 7.1. The solution x(t; t0, x0) is said to be stable on [t0, ∞) in case
for each ε > 0, there exists δ > 0 such that ‖x1 – x0‖ < δ implies that all
solutions x(t; t0, x1) exist on [t0, ∞) and ‖x(t; t0, x1) – x(t; t0, x0)‖ < ε on [t0,
∞).

Fig. 7.1 Stable solution.

Example 7.1 (Stability).

(1) x′ = 0. Let x(t; 0, 0) be a solution on [0, ∞). Then, x(t; 0, 0) ≡ 0. This is


clearly stable by uniqueness of solutions and the nature of the equation
x′ = 0.
But to verify the stability of x(t; 0, 0) ≡ 0, let δ = ε. Then if ‖x1 – 0‖ <
δ, the solution of x′ = 0, x(0) = x1 is given by x(t; 0, x1) = x1 and

(2) x′ = −x on [0, ∞). In this case x(t; 0, 0) ≡ 0. Let x(t; 0, x1) be any other solution satisfying x(0) = x1. Then x(t; 0, x1) = x1e^(−t). Consider |x(t; 0, x1) − x(t; 0, 0)| = |x1|e^(−t) ≤ |x1|. Thus, taking δ = ε, the solution x(t; 0, 0) is stable on [0, ∞).
(3) x′ = x on [0, ∞). Again, x(t; 0, 0) ≡ 0. For any x1 ≠ 0, |x(t; 0, x1) − x(t; 0, 0)| = |x1|e^t → ∞ as t → ∞, and so the solution x(t; 0, 0) ≡ 0 is unstable on [0, ∞).
Definition 7.2. The solution x(t; t0, x0) is said to be asymptotically stable on
[t0, ∞) in case

(1) x(t; t0, x0) is stable on [t0, ∞), and

(2) there exists η > 0 such that ‖x1 – x0‖ < η implies

i.e., the solutions approach each other asymptotically.


Example 7.2. In Example 7.1(2) for x′ = −x, the solution x(t; 0, 0) ≡ 0 is
asymptotically stable because |x1|e−t → 0, as t → ∞.

Note: The above definitions of stability and asymptotic stability are due
to Lyapunov.
Definition 7.3. The solution x(t; t0, x0) is said to be uniformly stable on [t0,
∞) in case for each ε > 0, there exists δ ε > 0 such that if t1 ≥ t0 and ‖x1 – x
(t1 ;t0,x0)‖ < δε, then ‖x(t; t1, x1) – x(t; t0, x0)‖ < ε on [t1, ∞).
Fig. 7.2 Uniformly stable solution.

Definition 7.4. The solution x(t; t0, x0) is said to be uniformly


asymptotically stable on [t0, ∞) in case
(1) x(t; t0, x0) is uniformly stable on [t0, ∞);
(2) there exists δ0 > 0 such that t1 ≥ t0 and ‖x1 – x(t1; t0, x0)‖ < δ0, imply
limt→∞‖x(t; t1, x1) – x(t; t0, x0)‖ = 0 (i.e., solutions are squeezed
together past t1), and
(3) for each ε > 0, there exists T(ε) > 0 such that t1 ≥ t0 and ‖x1 – x(t1; t0,
x0)‖ < δ0 imply ‖x(t; t1, x1) – x(t; t0, x0)‖ < ε, for t ≥ t1 + T(ε) (this says
the uniformity of the squeezing is bounded by ε, for any t past t1 +
T(ε)).
Definition 7.5. The solution x(t; t0, x0) is said to be strongly stable on [t0,
∞) in case for each ε > 0, there exists δε > 0 such that t1 ≥ t0 and ‖x1 – x(t1;
t0, x0)‖ < δε imply x(t; t1, x1) exists on [t0, ∞) and ‖x(t; t1, x1) – x(t; t0, x0)‖ <
ε on [t0, ∞).

Fig. 7.3 Strongly stable solution.

Obviously,
Exercise 37. Show that the following definition of stability is equivalent to
the given one if solutions of IVP’s for x′ = f(t, x) are unique: x(t; t0, x0) is
stable on [t0, ∞) in case given t1 ≥ t0 and given ε > 0, there exists δ (t1, ε) >
0 such that ‖x1 – x(t1; t0, x0)‖ < δ implies: x(t; t1, x1) exists on [t1, ∞) and
‖x(t; t1, x1) – x(t; t0, x0)‖ < ε on [t1, ∞).
Note: This clearly implies Definition 7.1 by taking t1 = t0. Hint for the
converse: Let xn be a sequence of tuples with xn → x(t1; t0, x0) and consider
the sequence of solutions x(t; t1, xn). Applying the Kamke Theorem, the
solutions approach x(t; t0, x0) on compact subintervals of [t0, ∞). Then apply
Definition 7.1 and the stated property in the exercise will be satisfied.
Note then that our Definition 7.1 and the statement in Exercise 37 are not
equivalent, if solutions of IVP’s are not unique to both the right and the left.
This says that our Definition 7.1 of stability gives uniqueness of solutions to
the right but not necessarily to the left.
Example 7.3 (Uniform Asymptotic Stability). Consider the solution x(t) ≡ 0 of x′ = −x on [0, +∞).
(1) Any solution with x(t1) = x1 is of the form x(t; t1, x1) = x1e^(−(t−t1)). Hence |x(t; t1, x1) − x(t; 0, 0)| = |x1|e^(−(t−t1)) ≤ |x1|, for t ≥ t1. Thus, taking δ = ε, x(t) ≡ 0 is uniformly stable.
(2) Let δ0 be a fixed positive number; then |x1 − x(t1; 0, 0)| = |x1| < δ0 implies limt→∞ |x(t; t1, x1) − x(t; 0, 0)| = limt→∞ |x1|e^(−(t−t1)) = 0. Thus, any δ0 > 0 works for this part.
(3) For this part, we proceed to find T(ε). Now |x(t; t1, x1) − x(t; 0, 0)| = |x1|e^(−(t−t1)) < δ0e^(−(t−t1)), where |x1| < δ0. Take T(ε) = ln(δ0/ε), so that if −(t − t1) ≤ −T(ε), then e^(−(t−t1)) ≤ ε/δ0. Therefore, for t ≥ t1 + T(ε), |x(t; t1, x1) − x(t; 0, 0)| < δ0(ε/δ0) = ε.

Thus, x(t) ≡ 0 is uniformly asymptotically stable.


Exercise 38. Show that the solution x(t) ≡ 0 of x″ + x = 0 is strongly stable
on [0, ∞).
Convert to

and then show that not only are the x-values close to 0, but so also are the
x′-values; i.e.,

are close to 0.
Example 7.4. Consider the solution x(t) ≡ 0 of the scalar equation x′ = a(t)x. Now x(t; 0, x0) = x0e^(∫0t a(s)ds).
(1) x(t) ≡ 0 is stable on [0, ∞) iff Re ∫0t a(s)ds ≤ M (constant) on [0, ∞).
To see this, note that |x(t; 0, x0) − 0| = |x0|e^(Re ∫0t a(s)ds), which can be made less than ε for all t ≥ 0 by choice of |x0| iff e^(Re ∫0t a(s)ds) is bounded, iff Re ∫0t a(s)ds ≤ M for some M > 0.
(2) x(t) ≡ 0 is asymptotically stable on [0, ∞) iff Re ∫0t a(s)ds → −∞, as t → ∞.
(3) x(t) ≡ 0 is uniformly stable on [0, ∞) iff there exists M ≥ 0 such that Re ∫st a(u)du ≤ M, for all 0 ≤ s ≤ t < ∞.

Example 7.5 (Asymptotic Stability ⇏ Uniform Stability). Consider the solution x(t) ≡ 0 of x′ = a(t)x on [1, ∞), where a(t) = sin(ln t) + cos(ln t) − α, where 1 < α < √2. Then

∫1t a(s)ds = t sin(ln t) − α(t − 1),

so solutions are of the form

x(t) = x0e^(t sin(ln t) − α(t − 1)),

where x(1) = x0. Note: sin(ln t) − α < 0, since α > 1.

So, |x(t)| ≤ |x0|e^α, for t ≥ 1, since α > 1. Thus the zero solution is stable on [1, ∞). Also x(t) = x0e^(t(sin(ln t) − α) + α) → 0, as t → ∞, since sin(ln t) − α ≤ 1 − α < 0. Therefore the zero solution is asymptotically stable on [1, ∞).
However, the zero solution is not uniformly stable. To see this, consider β with α < β < √2. By continuity we can fix such a β and an interval [θ1, θ2] such that sin θ + cos θ ≥ β on [θ1, θ2]. See Figure 7.4.

Fig 7.4 sin θ + cos θ ≥ β on [θ1, θ2].

For n = 1, 2, …, let t1n = e^(θ1+2nπ) and t2n = e^(θ2+2nπ). Then sin(ln t) + cos(ln t) ≥ β on [t1n, t2n]. Hence x(t2n) ≥ x(t1n)e^((β−α)(t2n−t1n)), because a(t) ≥ β − α on [t1n, t2n]. Therefore, as n → ∞,

e^((β−α)(t2n−t1n)) → +∞.
This can be intuitively seen from Figure 7.5.

Fig. 7.5 Graph of function a(t).

Now if x(t) is a solution in the δε-tube at t1n, it does not stay in the ε-tube. In fact, let x(t) be a solution with x(t1n) = δε/2. Then, from the above, x(t2n) ≥ (δε/2)e^((β−α)(t2n−t1n)), which can be made arbitrarily large for n large enough. In particular, |x(t2n)| > ε for large n, and hence the zero solution is not uniformly stable. See Figure 7.6.

Fig. 7.6
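A short numerical check of this example (with the hypothetical choices α = 1.2 and [θ1, θ2] = [0.5, 1.0], which satisfy the constraints above): the solution with x(1) = 1 tends to 0, while the growth exponent over [t1n, t2n] becomes arbitrarily large with n.

    import numpy as np
    from scipy.integrate import quad

    alpha = 1.2                                    # a value with 1 < alpha < sqrt(2)
    a = lambda t: np.sin(np.log(t)) + np.cos(np.log(t)) - alpha
    x = lambda t: np.exp(t * np.sin(np.log(t)) - alpha * (t - 1.0))  # solution with x(1) = 1

    # asymptotic stability: x(t) -> 0
    print([x(t) for t in (10.0, 100.0, 1000.0)])

    # failure of uniform stability: along t1n = e^{theta1 + 2 n pi}, t2n = e^{theta2 + 2 n pi}
    theta1, theta2 = 0.5, 1.0                      # sin + cos >= beta > alpha on [theta1, theta2]
    for n in (1, 2, 3):
        t1n, t2n = np.exp(theta1 + 2 * n * np.pi), np.exp(theta2 + 2 * n * np.pi)
        growth = quad(a, t1n, t2n, limit=500)[0]   # integral of a over [t1n, t2n]
        print(n, growth)                           # log of x(t2n)/x(t1n); grows without bound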

7.2 Stability of Vector Linear Systems

For our next consideration, we discuss stability concepts for a vector linear
system
x′ = A(t)x + f(t),    (7.1)
where A(t) and f(t) are continuous on [t0, ∞). Let x(t; t0, x0) be a solution of
(7.1). This solution is stable on [t0, ∞) in case for each ε > 0, there exists δε
> 0 such that ‖x1 – x0‖ < δ implies ‖x (t; t0, x1) – x(t; t0, x0)‖ < ∊ on [t0, ∞).
Let
y′ = A(t)y    (7.2)
be the homogeneous system and let “x(t)” and “y(t)” denote solutions of
(7.1) and (7.2) respectively.
Note that if x(t) is a solution of (7.1) and y(t) is a solution of (7.2), then x(t) + y(t) is also a solution of (7.1). Conversely, the difference of two solutions of (7.1) is a solution of (7.2).
Remark 7.1. The solution x(t; t0, x0) of (7.1) is stable on [t0, ∞) iff the zero
solution of (7.2) is stable on [t0, ∞).

Proof. First, let us note that the zero solution of (7.2) is stable on [t0, +∞) in
case, given ∊ > 0, there exists δ ∊ > 0 such that ‖y1‖ < δ implies ‖y(t; t0,
y1)‖ < ∊ on [t0, +∞).
Now let us suppose first that x(t; t0, x0) is stable and let ∊ > 0 be given
and δ ∊ be the corresponding delta.
Now let y1 be such that ‖y1‖ < δ∊. Then ‖y(t; t0, y1)‖ = ‖x(t; t0, y1 + x0) − x(t; t0, x0)‖ < ∊ by the stability of x(t; t0, x0). Therefore, the zero solution of (7.2) is stable.
The converse is similar. Assume the zero solution of (7.2) is stable with
∊ and δ∊ as usual. If ‖x1 − x0‖ < δ∊, then ‖x(t; t0, x1) − x(t; t0, x0)‖ = ‖y(t; t0, x1 − x0)‖ < ∊ by the stability of the zero solution of (7.2). So, the solution x(t; t0, x0) is stable on [t0, ∞).

Remark 7.2. The same argument shows that for each of the other four types
of stability, a fixed solution of (7.1) has that type of stability iff the zero
solution of (7.2) has that same type of stability.
Therefore, for a fixed equation (7.1), all solutions have the same type of
stability, because of the “iff” of the stability of the zero solution of (7.2),
and the type of stability is determined by the homogeneous system (7.2).
Hence, we say that the system (7.2) is stable, is unstable, is asymptotically
stable, etc. ☐
Theorem 7.1. Let A(t) be a continuous n × n matrix function on [t0, ∞), and
let X(t) be the solution of
X′ = A(t)X,  X(t0) = I.
Then the system (7.2), x′ = A(t)x, is
(1) Stable on [t0, ∞) iff there exists K > 0 such that ‖X(t)‖ ≤ K on [t0, ∞).
(2) Uniformly stable on [t0, ∞) iff there exists K > 0 such that ‖X(t)X−1(s)‖ ≤ K, for all t0 ≤ s ≤ t < ∞.
(3) Strongly stable on [t0, ∞) iff there exists K > 0 such that ‖X(t)‖ ≤ K and ‖X−1(t)‖ ≤ K on [t0, ∞).
(4) Asymptotically stable on [t0, ∞) iff limt→∞ ‖X(t)‖ = 0.
(5) Uniformly asymptotically stable on [t0, ∞) iff there exist K > 0 and α > 0 such that ‖X(t)X−1(s)‖ ≤ Ke^(−α(t−s)), for all t0 ≤ s ≤ t < ∞.
Proof. (1) Assume that ‖X(t)‖ ≤ K. The solution x(t; t0, x0) = X(t)x0, so ‖x(t; t0, x0)‖ = ‖X(t)x0‖ ≤ K‖x0‖ < ∊, provided ‖x0‖ < ∊/K. The zero solution of (7.2) is stable, which is what we mean by saying the system (7.2) is stable.
Conversely, assume x′ = A(t)x is stable on [t0, ∞). Then, for ∊ = 1, there exists δ0 > 0 such that ‖x0‖ < δ0 implies ‖x(t; t0, x0)‖ < 1 on [t0, ∞). So, ‖X(t)x0‖ < 1 on [t0, ∞) for ‖x0‖ < δ0. Now let y0 be a vector such that ‖y0‖ < 1. Let x0 = δ0y0, then ‖x0‖ < δ0. Hence, ‖X(t)y0‖ < 1/δ0 for all t ∈ [t0, ∞). By the definition of matrix norm, ‖X(t)‖ ≤ 1/δ0. Taking K = 1/δ0, the converse is satisfied.
(2) Assume now that ‖X(t)X−1(s)‖ ≤ K. Let s ∈ [t0, ∞) and ∊ > 0 be given. Then for t ≥ s, ‖x(t; s, x0)‖ = ‖X(t)X−1(s)x0‖ ≤ K‖x0‖ < ∊, provided ‖x0‖ < ∊/K (which is independent of s). Thus (7.2) is uniformly stable.
Conversely, assume (7.2) is uniformly stable on [t0, ∞). Then, given ∊ > 0, there exists δ∊ > 0 such that t1 ≥ t0 and ‖x0‖ < δ∊ imply ‖x(t; t1, x0)‖ < ∊ on [t1, ∞). Hence, ‖X(t)X−1(t1)x0‖ < ∊ on [t1, ∞). By taking ∊ = 1, the same argument as above can be used to show that ‖X(t)X−1(t1)‖ ≤ 1/δ1 for all t ≥ t1 ≥ t0.
Remark 7.3. The system (7.2) is stable on [t0, ∞) iff each solution of x′ =
A(t)x is bounded on [t0, ∞).
(3) Assume ‖X (t)‖ ≤ K and ‖X−1(t)‖ ≤ K on [t0, ∞). Let x(t; t1, x0) be a
solution with x(t1) = x0. Then ‖x (t; t1, x0)‖ = ‖ X t)X−1(t1)x0‖ ≤ image.
Thus (7.2) is strongly stable on [t0, ∞).
For the converse, assume (7.2) is strongly stable on [t0, ∞). Then given
any t1 ≥ t0, and any ∊ > 0, (in particular, take ∊ = 1), there exists δ 0 > 0
such that ‖x0‖ ≤ δ 0 implies ‖x(t; t1, x0)‖ < 1 on [t0, ∞). There are two cases:
Case 1: Take t1 = t0. Then, ‖x(t; t0, x0)‖ = ‖ X(t)x0‖ < 1 on [t0, ∞) for ‖
x0‖ < δ 0. Part (1) of this theorem, this implies image.
Case 2: Take t = t0. Then,

image
for ‖x0‖ < δ 0. Then, as before image. But t1 is arbitrary, hence, image.
(4) Assume that ‖X (t)‖ → 0 as t → +∞. Since ‖X (t)‖ is continuous on
[t0, ∞), ‖X (t)‖ is bounded on [t0, ∞). By part (1) of this theorem, the
equation (7.2) is stable on [t0, ∞). Secondly, if x0 ∈ ℝn, then ‖x (t; t0, x0)‖ =
‖ X (t)x0‖ ≤ ‖ X (t)‖‖x0‖. Hence, limt→ ∞ ‖x (t; t0, x0)‖ = 0. Therefore (7.2)
is asymptotically stable on [t0, ∞).
For the converse, if (7.2) is asymptotically stable on [t0, ∞), then (7.2) is
stable, hence ‖X (t)‖ is bounded (see the remark above). Also, by the
asymptotically stability of (7.2), there exists η > 0 such that ‖x0‖ ≤ η
implies lim t→∞‖x (t; t0, x0)‖ = 0. But we don’t need η > 0 such that ‖x0‖ ≤
η; i.e., given x0 ∈ ℝn, there exists K ∈ ℝ such that image, and we also
have image. Thus image, for all x0 ∈ ℝn.
So let ∊ > 0 be given. Then for each 1 ≤ j ≤ n, there exists tj ≥ t0 such
that ‖x (t; t0, ej)‖ < ∊ on [tj, +∞), since limt→ ∞ ‖x (t; t0, ej)‖ = 0.
Let T = max1≤j ≤ n{tj}. Then for any x0 ∈ ℝn, with ‖x0‖ ≤ 1, and any t ≥
T, we have
image
Then, ‖X(t)x0‖ = ‖x(t; t0, x0)‖ ≤ n∊, for all t ≥ T and all ‖x0‖ ≤ 1, since x(t; t0, x0) = X(t)x0. So, ‖X(t)‖ ≤ n∊, for all t ≥ T. Therefore, ‖X(t)‖ → 0 as t → ∞.
(5) Assume first that image. Then by part (2) of this theorem, (7.2)
is uniformly stable on [t0, ∞). Next, given image (we use image
so that ln image).Let T image. Then, for any t1 ≥ t0 and any x1 ∈
ℝn with ‖x1‖ ≤ 1, we have that for t ≥ t1 + T (∊),

image
Note: t – t1 ≥ T (∊) implies image. Hence, ‖ x(t; t1, x1)‖ ≤ ∊. Therefore,
(7.2) is uniformly asymptotically stable on [t0, ∞).
Conversely, assume that x′ = A(t)x is uniformly asymptotically stable on
[t0, ∞). Then, there exists δ 0 > 0 such that, for each 0 < ∊ < δ0, there exists
T (∊) such that ‖x0‖ < δ 0 and t ≥ t1 + T (∊), then ‖x (t; t1, x0) ‖ < ∊. So, ‖
X(t) X −1(t1) x0‖ < ∊, for t ≥ t1 + T (∊) and ‖x0‖ < δ 0. Thus, image.
But image is an arbitrary vector with norm < 1. Consequently
image
Now x’ = A(t)x is uniformly stable on [t0, ∞), and so by part (2) of this
theorem, ‖X (t) X−1(s) ‖ ≤ K for t0 ≤ s ≤ t. So now let t ≥ t1 ≥ t0 and let the
integer m ≥ 0 be such that t1 + mT (∊) ≤ t < t1 +(m + 1)T (∊). Then,
image
So, from (7.3),
image
where α = −T (∊)−1ln θ (note θ < 1 yields ln θ < 0).
Hence, image, since t – t1 < (m + 1)αT(∊). ☐

Exercise 39. Consider the autonomous system x′ = f (x).


(i) Show that the stability of a constant solution x(t; t0, x0) = x0 on [t0, ∞)
implies the uniform stability of that same solution on [t0, ∞). (Hint: Show if
x(t) is a solution, so is x(t + s) for any fixed s. If x(t; t0, x0) is stable, then
any translate can be shown to be stable.)
Note: (ii) You can similarly show that if x(t; t0, x0) is asymptotically stable on [t0, ∞), then it is also uniformly asymptotically stable on [t0, ∞).

Definition 7.6. Consider the autonomous system x′ = Ax. λ is an eigenvalue


of A of simple type in case it appears in the diagonal block J0 of the Jordan
form of A.
Theorem 7.2. The autonomous linear system x′ = Ax is stable iff all
eigenvalues of A have nonpositive real parts and those with zero real part
are of simple type. The system is strongly stable iff all eigenvalues of A have
zero real part and are of simple type. The system is asymptotically stable iff
all eigenvalue of A have negative real parts.
Sketch of proof. There exists a nonsingular C such that
image
where
image
Then
image
and
image
Then X(t) = etA = CetJC−1.
Entries in etA are of the form pj(t)e^(λjt), where the pj(t) are polynomials in t. Hence, in the presence of stability, since ‖X(t)‖ is bounded, all entries in X(t) must be bounded. We can apply our previous theory to obtain the first statement, and this will clearly be in the “iff” sense.
For the other parts, consider X(t) = etA and X−1(t) = e−tA. Eigenvalues of
A and −A are negatives of each other. This will give eigenvalues of the zero
real-parts. ☐
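Theorem 7.2 translates directly into a finite procedure. The sketch below uses hypothetical matrices (not from the text); the simple-type test is implemented by comparing geometric and algebraic multiplicities, which is equivalent to the eigenvalue appearing only in J0.

    import sympy as sp

    def classify(A):
        """Rough stability check for x' = Ax via Theorem 7.2. An eigenvalue is of
        simple type iff it is semisimple (geometric multiplicity = algebraic one)."""
        M = sp.Matrix(A)
        verdict = "asymptotically stable"
        for lam, alg_mult in M.eigenvals().items():
            r = sp.re(lam)
            if r > 0:
                return "unstable"
            if r == 0:
                verdict = "stable"
                geo_mult = len((M - lam * sp.eye(M.shape[0])).nullspace())
                if geo_mult < alg_mult:        # lam sits in some block Ji, not in J0
                    return "unstable"
        return verdict

    print(classify([[0, 1], [-1, 0]]))   # eigenvalues +/- i, simple type -> stable
    print(classify([[0, 1], [0, 0]]))    # eigenvalue 0 in a 2x2 Jordan block -> unstable
    print(classify([[-1, 3], [0, -2]]))  # all real parts negative -> asymptotically stable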
Stability of the system x′ = A(t)x can also be studied using Floquet
Theory, if the system is a periodic system. So assume the system is ω-
periodic; i.e., A(t + ω) = A(t). Then by the Floquet theory, X(t) = Y(t)etR, where X(t) is the solution of X′ = A(t)X, X(0) = I, and Y(t) ∈ C(1)(ℝ) is
nonsingular, and ω-periodic. Hence there exists M > 0 such that ‖Y (t)‖ ≤
M, ‖Y−1(t)‖ ≤ M on ℝ.
Recall also that ωR = ln X (ω) and that the eigenvalues of R are called
the characteristic exponents of the system x′ = A(t)x.
We now consider some special pairs of inequalities.
First,
image
Next, image yield
image
Also,
image
Recall that etR is the fundamental matrix solution of
image
Via the use of the above pairs of inequalities, when compared with
Theorems 7.1 and 7.2, we have the following theorem.
Theorem 7.3. The periodic system x′ = A(t)x is stable on [0, ∞) iff all
characteristic exponents of R have nonpositive real parts and those with
zero real parts are of simple type. The system is strongly stable iff all
characteristic exponents of R have zero real parts and are of simple type.
The system is asymptotically stable iff all characteristic exponents of R have
negative real parts.
Proof. The proof is very similar to that of Theorem 7.2. ☐
Our next few remarks concern the stability of solutions of X′ = A(t)X in
terms of the coefficient matrix A(t).
Suppose A(t) is a continuous n × n matrix function on [t0, ∞). Let X(t) be the solution of

X′ = A(t)X,  X(t0) = I.

Recall that

det X(t) = exp(∫t0t Tr A(s)ds).

Lemma 7.1. If lim supt→∞ Re ∫t0t Tr A(s)ds = +∞, then the system x′ = A(t)x is unstable on [t0, ∞).
Proof. Suppose on the contrary that the system is stable on [t0, ∞). Then
there exists K > 0 such that ‖X (t)‖ ≤ K on [t0, ∞). This implies that all
entries in x(t) are bounded, hence |det X(t)| is bounded on [t0, ∞). However,

which is a contradiction. Hence, x′ = A(t)x is unstable. ☐


Lemma 7.2. Let x′ = A(t)x be a stable system on [t0, ∞). Then the system is strongly stable on [t0, ∞) iff Re ∫t0t Tr A(s)ds is bounded from below on [t0, ∞).
Proof. Assume first that the system is strongly stable on [t0, ∞). Then, there
exists K > 0 such that ‖X (t)‖ ≤ K and ‖X−1(t)‖ ≤ K on [t0, ∞). Hence, the
entries in X−1(t) are bounded on [t0, ∞). So, |det X−1(t)| is bounded on [t0, ∞)
and we know
So, is bounded on [t0, ∞). Therefore, by an argument as in
Lemma 7.1, . Hence,

For the other part, assume the system is stable and that

Then simply retrace the steps above to conclude that [det X(t)]−1 is bounded
on [t0, ∞). Also by the assumed stability of the system, ‖X(t)‖ is bounded
on [t0, ∞). So, the entries in X (t) are bounded on [t0, ∞), hence the entries in
X−1(t) are bounded on [t0, ∞). Hence, ‖X−1(t)‖ is bounded, thus the system
is strongly stable. ☐
Example 7.6.
(1) The statement of Lemma 7.1 is not “iff”.
Consider x″ − x = 0. The corresponding system is y′ = A(t)y with

A(t) ≡ [0  1; 1  0].

Solutions of x″ − x = 0 are sinh t and cosh t, which are unbounded on [0, ∞). So x″ − x = 0 is unstable. But Tr A(t) ≡ 0 yields ∫0t Tr A(s)ds ≡ 0. Hence the system is unstable, but the integral does not tend to +∞.
(2) For equations of the form x″ + a(t)x = 0, it follows from Lemma 7.2
that if the equation is stable on [t0, ∞), then it is strongly stable. Here

Tr A(t) ≡ 0, hence ∫t0t Tr A(s)ds ≡ 0 is bounded. Thus the lemma is satisfied, so we have strong stability of the system, provided the system was stable to begin with.
Chapter 8

Perturbed Systems and More on


Existence of Periodic Solutions

8.1 Perturbed Linear Systems

Consider x′ = A(t)x + f(t, x), where A(t) is an n × n matrix-valued function.


This equation can be considered as a perturbation of the linear system x′ =
A(t)x.
For
x′ = f(t, x),    (8.1)
let x(t; t0, x0) be a solution on [t0, ∞) with x(t0) = x0. For shorthand, let x(t) ≡
x(t; t0, x0).
If y(t) is also a solution of x′ = f(t, x), then denote z(t) ≡ y(t) – x (t). Then
z(t) is a solution of z′ = y′ – x′ = f (t, y) – f (t, x) = f (t, z + x) – f (t, x); i.e.,
z′ = f(t, z + x(t)) − f(t, x(t)).    (8.2)
If x(t) is a solution of (8.1), then y(t) = z(t) + x(t) is also a solution of (8.1) iff z(t) is a solution of (8.2). Moreover, |z(t) − 0| = |y(t) − x(t)|, and so,
studying stability of solutions of (8.1) is the same as studying the stability of
the zero solution of (8.2); i.e., a solution x(t) of (8.1) has a type of stability
iff the zero solution of (8.2) has the same type of stability on [t0, ∞).
Let us examine the solution z(t) of (8.2) more closely. Consider formally
image. Then
image
Comparing this to equation (8.2), we have image. Adding and subtracting
the term fx(t, x(t)), we have

image
Thus z(t) is a solution of the perturbed linear system z′ = A(t)z + h (t, z),
where A(t) = fx(t, x(t)) and

image
Hence, we look at the stability of the zero solution of this reduced
system
image
in evaluating the stability of a solution x(t) of (8.1).
In order to make such considerations, we need to assume x(t) is a
solution of (8.1) on [t0, ∞), and f(t,x) is continuous and has continuous first
partials wrt the components of x on the tube U = {(t, x) | t ≥ t0, ‖x – x(t)‖ <
r}, where r > 0.
With these assumptions, we have
image
on each compact subinterval of [t0, ∞), since fx is uniformly continuous
there, i.e., image uniformly wrt t on each compact subinterval of [t0, ∞).
Now, in the autonomous case, if c ∈ ℂn is such that f(c) = 0, then x(t) ≡ c
is a solution of x′ = f(x). In this case, the perturbed system (8.3) becomes z′
= fx(c)z + h(z), where fx(c) is a constant matrix and image. Hence,
image uniformly for all t (since no t appears).
For yet another observation, if f(t, x) has period ω in t; i.e., f(t +ω, x) =
f(t, x), and if x(t) is a periodic solution of x′ = f(t, x) with period ω on the
real line, and if f(t, x) is continuous and has continuous first partials wrt the
components of x on the tube U, the equation (8.3) becomes z′ = A(t)z + h(t,
z), where A(t) = fx (t, x(t)) is periodic with periodic ω, and h(t + ω, z) = h(t,
z). From the formula and periodicity of h(t, z), we have image
uniformly on each compact subinterval of (–∞, ∞); because of the
periodicity, we need consider the limit only on [0, ω] which is compact.
Thus image uniformly on (-∞, ∞).
Theorem 8.1. Assume that in the equation
x′ = A(t)x,    (8.4)
A(t) is a continuous n × n matrix function on [t0, +∞), and assume that (8.4)
is uniformly stable on [t0, +∞). In the perturbed system
x′ = A(t)x + f(t, x),    (8.5)
assume that f(t,x) is continuous on [t0, ∞) × Br(0), that f (t, 0) ≡ 0 on [t0, ∞),
and that ‖f(t, x)‖ ≤ γ(t)‖x‖ on [t0, ∞) × Br(0), where γ(t) is integrable on [t0, t1], for each t1 > t0, and ∫t0∞ γ(s)ds < +∞. Then there exists L > 1 such that, for each t1 ≥ t0 and x0 with ‖x0‖ < L−1r, the solution x(t; t1, x0) of (8.5) exists on [t1, ∞) and satisfies ‖x(t; t1, x0)‖ ≤ L‖x0‖ on [t1, ∞).

Proof. Assume x(t; t1, x0) is a solution of (8.5) with t1 ≥ t0 and ‖x0‖ < L−1r.
We will find an expression for L which satisfies the conditions of the
theorem.
For notation, let x(t) ≡ x(t; t1, x0), and let [t1, ω) be the right maximal
interval of existence of x(t). Thus on [t1, ω), x(t) is a solution of x′ = A(t)x +
f(t, x(t)). Now f(t, x(t)) is just a function of t, hence by the Variation of the
Constants Formula,
image
where X(t) is the solution of X′ = A(t)X, X (t0) = I. Note: X (t)X−1(t1) is the
solution of X′ = A(t)X, X (t1) = I.
Now by the uniform stability of (8.4), there exists K > 0 such that ‖X(t)‖
≤ K and ‖X (t) X−1(s) ‖≤K, t0 ≤ s ≤ t < ∞. Hence, for each image.
Applying the Gronwall Inequality, we have
image
where image. Now this puts a “lid” on x(t). Since [t1, ω) is right maximal,
recall that x(t) → ∂ D as t → ω, where D = [t1, ω) × Br(0), but this condition
puts a “lid” on that. See Figure 8.1.
Now let h > 1 be fixed and set L ≡ max{h, L*}. Assume ‖x0‖ < L−1r; in
fact, let r0 = L‖ x0‖, then, ‖x0‖ = L−1r0 < L−1r. Hence, r0 < r. Then we have
‖x(t)‖ ≤ L*‖x0‖ ≤ L‖x0‖ = r0 on [t1, ω).
We claim that ω = +∞: Choose image such that r0 image < r
and assume ω < ∞. Now the set image (compact) ⊆ [t0, ∞) × Br (0).
Now f(t,x) is continuous on [t0, ∞) × Br(0). By Chapter 2, the solution
approaches the boundary as t → ω, hence must leave the compact set. So,
there exists τ with t1 < τ < ω such that image for all τ < t < ω. Now t, τ
∊ [t1, ω), hence the graph leaves at the “top” of the compact set, i.e., ‖x(t)‖
> image > r0, for τ < t < ω. But ‖x(t)‖ ≤ L‖ x0‖ = r0 < image, for
all t1 ≤ t < ω, which is a contradiction. Thus, ω = +∞.

image

Fig. 8.1 The solution must leave each compact set (as t → ω), i.e., x(t)
leaves the top of the small tube, but this condition puts a lid on that.

Then it follows that ‖x(t)‖ ≤ L‖ x0‖ on [t1, ∞), for all t1 ≥ t0. ☐

Corollary 8.1. If the hypotheses of Theorem 8.1 are satisfied, then the
solution x(t) ≡ 0 of (8.5) is uniformly stable on [t0, ∞).

Proof. Obviously, x(t) ≡ 0 is a solution since f(t, 0) = 0.


Given ε > 0 and t1 ≥ t0, choose δ = min{L−1ε, L−1r}. Then ‖x0‖ < δ and Theorem 8.1 give ‖x(t)‖ ≤ L‖x0‖ < ε. Hence, the zero solution is uniformly stable. ☐

Corollary 8.2. Assume again the hypotheses of Theorem 8.1 and in addition
that the system x′ = A(t)x is asymptotically stable on [t0, ∞). Then the zero
solution of (8.5) is both uniformly stable and asymptotically stable on [t0,
∞).
Proof. By Corollary 8.1, the zero solution is uniformly stable.
Now let x0 ∊ ℂn and satisfy ‖x0‖ < L −1r where L is as in Theorem 8.1.
Then x(t)= x(t; t0, x0) exists on [t0, ∞) by Theorem 8.1 and

image
where X(t) is the solution of X′ = A(t)X, x(t0) = I. For t ≥ t1 > t0, we can
write
image
Upon applying norm inequalities and using ‖X(t)X−1(s)‖ ≤ K and ‖x(t)‖ ≤
L‖ x0‖ on t0 ≤ s ≤ t < ∞, we have

image
Given ε > 0, since image, we can choose t1 > t0 such that

image
From the asymptotic stability of (8.4), ‖X(t)‖ → 0 as t → ∞. Hence, we can choose t2 > t1 such that, for t ≥ t2, we have

image
Therefore, from above, for t ≥ t2, ‖x(t)‖ < ∊. Hence, limt→∞ ‖x(t)‖ = 0. Therefore, the zero solution of (8.5) is asymptotically stable. ☐
Corollary 8.3. Assume that A(t) and B(t) are continuous n × n matrix
functions on [t0, ∞) and assume that the system x′ = A(t)x is uniformly stable
(and asymptotically stable), and assume that ∫t0∞ ‖B(s)‖ds < +∞. Then the system

x′ = [A(t) + B(t)]x

is uniformly stable (and asymptotically stable) on [t0, ∞).

Proof. Let f(t, x) = B(t)x and γ(t) = ‖B(t)‖. The conclusion follows from
above. ☐
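A numerical illustration of Corollary 8.3, with the unperturbed system x″ + x = 0 and a hypothetical integrable perturbation B(t) (a sketch, not from the text):

    import numpy as np
    from scipy.integrate import solve_ivp

    # Unperturbed system: x'' + x = 0, written as y' = A y (uniformly stable).
    A = np.array([[0.0, 1.0],
                  [-1.0, 0.0]])

    # Hypothetical perturbation with integrable norm: ||B(t)|| ~ 1/(1+t)^2.
    B = lambda t: np.array([[0.0, 0.0],
                            [1.0 / (1.0 + t) ** 2, 0.0]])

    rhs = lambda t, y: (A + B(t)) @ y
    sol = solve_ivp(rhs, (0.0, 500.0), [1.0, 0.0], max_step=0.1)
    print(np.max(np.abs(sol.y)))     # stays bounded, as Corollary 8.3 predicts

    # By contrast, a slowly decaying, non-integrable perturbation is exactly the
    # kind of term Example 8.1(3) below warns about; boundedness is then no
    # longer guaranteed.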
Example 8.1. (1) The D.E. x″ + x = 0 is uniformly stable on [0, +∞). This is true, because all solutions are linear combinations of sin t and cos t.
Consider the perturbed system
image
Let
image
then system (8.7) is
image
and the perturbed system (8.8) is given by
image
Since (8.7) is uniformly stable, (8.8) is also uniformly stable on [0, +∞)
if image.
(2) Let us examine the function γ(s) in Theorem 8.1.
Now ∫t0∞ γ(s)ds < +∞ is a growth condition on γ as t → ∞. It is also the case that “γ(t) → 0 as t → ∞” is a type of growth condition.
Is it possible that ∫t0∞ γ(s)ds < +∞ and yet γ(t) ↛ 0 as t → ∞?
Consider the function e^(−t) in Figure 8.2. Now alter this function by adding spines that become more and more narrow, but taller, where say γ(n) = n, yet ∫0∞ γ(s)ds < +∞. The answer to the question is in the affirmative. See Figure 8.3.
image

Fig. 8.2 The graph of x = e−t.

(3) In light of (2), let’s consider the following example. That is, we look at an example where we replace, in Theorem 8.1, the condition ∫t0∞ γ(s)ds < +∞ by γ(t) → 0 as t → ∞. The resulting problem will not be stable.

Fig. 8.3 The alternation of x = e−t.

Let h(t) be a differentiable function on [1, +∞), and define φ(t) ≡ e^(∫1t h(s)cos s ds) and define x(t) ≡ φ(t)cos t.
Now x(t) is a solution of x″ + (1 + b(t))x = 0, where b(t) = 3h(t) sin t − h′
(t) cos t − h2(t)cos2t.
In particular, let’s take h(t) = (a cos t)/t, where a > 0 is constant. Then

b(t) = (2a sin 2t)/t + β(t),

where β(t) = (a cos^2 t − a^2cos^4 t)/t^2 is the second term in the sum. So we can write the D.E. satisfied
by x(t) as

which can be thought of as a perturbation of

Also, |β(t)| ≤ (a + a^2)/t^2 on [1, +∞), hence ∫1∞ |β(s)|ds < +∞.


By Theorem 8.1, if (8.10) is uniformly stable on [1, +∞), then (8.9) is also uniformly stable on [1, +∞).
On the other hand, (8.10) is a perturbation of x″ + x = 0 with perturbing term (2a sin 2t/t)x, so that in our theorem, γ(t) = 2a/t on [1, ∞). Thus γ(t) → 0, as t → ∞. Furthermore, x″ + x = 0 is uniformly stable on [1, ∞).
Therefore, if Theorem 8.1 were true with inline replaced by γ(t) → 0
as t → ∞, then (8.10) would be uniformly stable on [1, ∞), hence so also
would be (8.9).
But, in fact, this is not the case. If we look at the solution x(t) of (8.9), i.e., x(t) = φ(t)cos t with φ(t) = e^(a∫1t (cos^2 s)/s ds) ≥ C·t^(a/2) for some constant C > 0, we see that x(t) is unbounded. So, (8.9) is unstable.
In conclusion, Theorem 8.1 is false if the integral condition is replaced
by γ(t) → 0, as t → +∞.

Definition 8.1. A matrix P ∊ Mn is called a projection in case P2 = P. In


this case, if x ∊ Range(P) and y ∊ ℂn is such that Py = x, then Px = P2y = Py
= x. Moreover, ℂn = [(I − P)ℂn] ⊕ [Pℂn]; i.e., any z ∊ ℂn can be written as
z = x + y, where x ∊ [I − P]ℂn and y ∊ Pℂn, and [(I − P) ℂn] ∩ [Pℂn] =
{0}.
Note also that I − P is a projection, so z = (I − P) z + Pz.
Lemma 8.1. Let Y(t) be a continuous n × n matrix function on [t0, ∞) and
assume Y (t) is nonsingular for all t ≥ t0. Assume also that P is a projection
such that there exists K > 0 with inline, for all t ≥ t0. Then, there exists N
> 0 such that inline on [t0, ∞).

Proof. If P = 0, the assertion is trivial. So assume P ≠ 0. Then ‖ Y (t)P‖ ≠ 0


on [t0, ∞). Define φ (t) = ‖ Y (t)P‖-1 which is continuous on [t0, ∞).
Consider
since P2 = P.
From inline, we have

that is,

Set inline, that is, inline. Hence, for any inline. Thus,
inline. From (8.11), inline, we have

Therefore,

Let inline Then inline.


Let inline. Then inline. ☐
Note: If P = I and Y (t) is continuous and nonsingular on [t0, ∞) and if for
some inline, on [t0, ∞), then inline, for some N > 0.
In particular, if Y (t) = X(t) is the solution of

and if the condition is satisfied, then the system is asymptotically stable,


since inline.
Theorem 8.2. Assume that there exists K > 0 such that ∫t0t ‖X(t)X−1(s)‖ds ≤ K, for all t0 ≤ t < ∞, where X(t) is the solution of X′ = A(t)X, X(t0) = I. Assume that f(t, x) is continuous for (t, x) ∊ [t0, ∞) × Br(0) and that ‖f(t, x)‖ ≤ γ‖x‖, where γK < 1. Then the zero solution of

x′ = A(t)x + f(t, x)    (8.12)

is asymptotically stable on [t0, ∞).

Proof. Let ‖x0‖ < r and x(t) ≡ x(t; t0, x0) be a solution of (8.12).
Let [t0, ω) be its right maximal interval of existence. Then

It follows from inline and Lemma 8.1 that inline. Hence, ‖X(t)‖ ≤ M on
[t0, ∞) for some M > 0. Then

By continuity, there exists t0 ≤ τt ≤ t such that ‖x(τt)‖ = inline.


Hence

So,

Therefore,

which is independent of t. Hence inline, on [t0, ω), and from this, we


conclude that if ‖x0‖ is chosen to be any fixed number such that inline,
then for the solution x(t; t0, x0), ω = +∞. To see this, take inline, where 0
< r0 < r and pick r0 so that inline. Then ‖x(t)‖ ≤ r0 < r on [t0,ω).
Therefore, ω = +∞ as we have seen before. Thus, if inline, then x(t; t0, x0)
exists on [t0, ∞) and inline. This implies the solution x(t) ≡ 0 of (8.12) is
stable.
To complete the proof, define μ = lim supt→∞ ‖x(t)‖ and choose θ such that γK < θ < 1.
Assume μ > 0. Then since θ – 1μ > μ, by the definition of “limsup”, there
exists t1 > t0 such that ‖x(t)‖ < θ−1μ, for all t ≥ t1.
Let x(t; t0, x0) be a solution of (8.12) with the initial conditions, where
inline. Then with this solution,

with inequality strict by choice of θ.


Letting t → ∞, since ‖X(t)‖ → 0, we have inline (note: γK < θ
yields γKθ−1 < 1).
This is a contradiction. Therefore μ = 0, so that lim supt→∞ ‖x(t)‖ = 0, or limt→∞ ‖x(t)‖ = 0. Hence the zero solution of (8.12) is asymptotically stable. ☐
Exercise 40. Assume that x′ = A(t) x + f (t, x) satisfies the hypotheses of
Theorem 8.2, and assume that the function b: [t0, ∞) → ℂn is continuous
and ‖b(t)‖ → 0, as t → +∞. Prove that there exist t1 ≥ t0 and δ0 > 0 such that
any solution x(t; t1, x1) of x′ = A(t)x + f (t, x) + b(t) with ‖x1‖ < δ0 exists on
[t1, ∞) and ‖x(t; t1, x1)‖ → 0, as t → +∞.

We now make the observation that the integral condition in Theorem 8.2 is stronger than asymptotic stability of the unperturbed system and weaker than uniform asymptotic stability of the unperturbed system.
For example, suppose x′ = A(t)x is uniformly asymptotically stable on [t0,
∞). Then ‖X(t)X−1(s)‖ ≤ Ke−α(t−s), t0 ≤ s ≤ t < ∞, where K, α are positive
constants. Then

∫t0t ‖X(t)X−1(s)‖ds ≤ ∫t0t Ke^(−α(t−s))ds ≤ K/α.

Thus, the integral condition is weaker than uniform asymptotic stability of x′ = A(t)x.
To see that the condition is strictly stronger than asymptotic stability of the unperturbed system, we proceed with an example: consider inline, with L.I. solutions inline and inline.
The first order system is

Exercise 41. Show that the zero solution is both uniformly stable and
asymptotically stable on [1, ∞).
Now consider the equation inline, with L.I. solutions sin t − t cos t,
cos t + t sin t.
The corresponding perturbed system is
and inline on [1, ∞), where

From the solutions sin t − t cos t, cos t + t sin t, clearly the zero solution
of the perturbed system is not asymptotically stable, whereas in the
exercise, the unperturbed system is asymptotically stable. Therefore, the integral condition in Theorem 8.2 is strictly stronger than asymptotic stability of the unperturbed system.
Theorem 8.3. Assume that the unperturbed system (8.1), x′ = A(t)x, is uniformly asymptotically stable on [t0, ∞), and let K > 0, α > 0 be such that ‖X(t)X−1(s)‖ ≤ Ke^(−α(t−s)), for t0 ≤ s ≤ t < ∞, where X(t) is the solution of

X′ = A(t)X,  X(t0) = I.

Assume f(t, x) is continuous on [t0, ∞) × Br(0), for some r > 0, and satisfies ‖f(t, x)‖ ≤ γ‖x‖ for constant γ, with γK < α. Then every solution x(t; t0, x0) of (8.2), x′ = A(t)x + f(t, x), with ‖x0‖ < min{K−1r, r} exists on [t0, ∞) and satisfies

‖x(t; t0, x0)‖ ≤ K‖x(s; t0, x0)‖e^(−β(t−s)),

for all t0 ≤ s ≤ t < ∞, where β = α − γK > 0.

Proof. Let ‖x0‖ < min {K−1r, r} and let [t0, ω) be the maximal interval of
x(t) ≡ x(t; t0, x0), a solution of (8.2). Then, for any t0 ≤ t1 ≤ t < ω,

We can replace s in the hypothesis by t1. Then

Multiplying by eαt, we have

and then by the Gronwall inequality,


Multiplying now by e−αt, we have

Now we can choose t1 = t0 so that

By our choice of ‖x0‖, we have K‖x0‖ < r, thus ‖x(t)‖ < r, on [t0, ω). It
follows as in previous results that ω = +∞. Hence from above, we have

Corollary 8.4. If the hypotheses of Theorem 8.3 are satisfied, then the
solution x(t) ≡ 0 of (8.2) is uniformly asymptotically stable.
Exercise 42. Prove Corollary 8.4.
Corollary 8.5. Assume that A(t), B(t) are continuous n × n matrix functions
on [t0,∞) and that x′ = A(t)x is uniformly asymptotically stable on [t0, ∞)
and that ‖B(t)‖ → 0, as t → ∞.
Then the system x′ = (A(t) + B(t))x is uniformly asymptotically stable on
[t0, ∞); i.e., the zero solution is uniformly asymptotically stable.

Exercise 43. Prove Corollary 8.5.


Hint: If C(t) is a continuous n × n matrix function on [t0, ∞) and if for
some t1 > t0, x′ = C(t)x is uniformly asymptotically stable on [t1, ∞), then x′
= C(t)x is uniformly asymptotically stable on [t0, ∞). Use ‖B(t)x‖ ≤
‖B(t)‖‖x‖ ≤ r‖x‖, if t1 is large. Then the conclusions of Theorem 8.3,
Corollary 8.1, and the previous exercise can be used.
Index

A
adjoint system, 94
asymptotically stable, 136

B
basis, 80

C
Cauchy function, 99
characteristic exponents, 123
characteristic multipliers, 123
characteristic polynomial, 104
classical solution, 1
comparison theorem, 65
complete metric space, 84
continuation of a solution, 29
continuity of solutions with respect to parameters, 45
continuous dependence of solutions on initial conditions, 42, 44
Contraction Mapping Principle, 130
D
differentiation with respect to initial conditions, 51
differentiation with respect to parameters, 55
Dini derivatives, 65

E
eigenvalue, 103
eigenvector, 103

F
first variational equation, 57
Floquet’s theorem, 121
fundamental matrix solution, 94

G
Gronwall inequality, 12

H
Hill’s equation, 124

I
initial value problem, 2
inner product, 83
J
Jordan canonical form, 111

K
Kamke convergence theorem, 37
Kamke uniqueness theorem, 73
kinematically similar matrices, 122

L
left maximal interval of existence, 31
linear matrix systems, 88
linear systems, 77
linearly independent, 80
Lipschitz condition, 4
logarithm of a matrix, 116

M
maximal interval of existence, 31
maximal solution, 61
metric space, 84
minimal solution, 61

N
Nagumo uniqueness result, 74
norm, 3
null space, 83
P
Peano existence theorem, 18
periodic matrix, 121
Picard existence theorem, 7, 132
Picard iterates, 8
projection, 156

R
range space, 83
Riemann integrable matrix, 87
right maximal interval of existence, 31

S
series of matrices, 85
similar matrices, 111
simple type eigenvalue, 145
solution space, 80
span, 80
stable, 47, 135, 140
strongly stable, 137

U
uniformly asymptotically stable, 137
uniformly stable, 136

V
variation of constants formula, 95
