Lecture Notes in Mathematical Biology
© Eduardo D. Sontag, Rutgers University, 2005, 2006
These notes were prepared for Math 336, Dynamical Models in Biology (formerly Differential Equations in Biology), a junior-level course designed for Rutgers Biomathematics undergraduate majors
and attended as well by math, computer science, genetics, biomedical engineering, and other students.
Math 336 does not cover discrete and probabilistic methods (genetics, DNA sequencing, protein alignment, etc.), which are the subject of a companion course.
The pre-requisites for Math 336 are four semesters of calculus, up to and including sophomore ordinary differential equations, plus an introductory linear algebra course. Students should be familiar
with basic qualitative ideas (phase line, phase plane) as well as simple methods such as separation of
variables for scalar ODEs.
However, it may be possible to use these notes without the ODE and linear algebra prerequisites,
provided that the student does some additional reading.
The companion website
http://www.math.rutgers.edu/ sontag/336.html
is an integral part of these notes, and should be consulted for many additional homework problems
(and answers), computer exercises, and other information.
An introduction to basic concepts in molecular biology can be found in that website as well.
The organization and much of the material were heavily inspired by Leah Keshet's beautiful book
Mathematical Models in Biology, McGraw-Hill, 1988, as well as other sources, but there is a little
more of an emphasis on systems biology ideas and less of an emphasis on traditional population
dynamics and ecology. Topics like Lotka-Volterra predator-prey models are assumed to have been
covered as examples in a previous ODE course.
The material in principle fits in a 1-semester course (but in practice, due to time devoted to exam
reviews, working out of homework problems, quizzes, etc., the last few topics may not all fit), and the
goal was to provide students with an overview of the field. With more time, one would include much
other material, such as Turing pattern-formation and detailed tissue modeling.
The writing is not textbook-like, but is telegraphic and streamlined. It is suited for easy reading and
review, and is punctuated so as to make it easy for instructors to use directly during class, with no
need to produce a separate outline.
Please note that many figures are scanned from books or downloaded from the web, and their copyright belongs to the respective authors, so please do not reproduce.
Please address comments, suggestions, and corrections to the author, whose email address can be
found in the above website.
These notes will be continuously revised and updated. This is version 2.2 (some typos fixed April/June
2008).
Contents
1.8 Michaelis-Menten Kinetics
2.1 Steady States
2.2 Linearization
3.2 Compartmental Models
4.3 Nullclines
4.4 Global Behavior
5.1 Analysis of Equations
5.2 Interpreting
5.3 Nullcline Analysis
5.4 Immunizations
5.5 A Variation: STDs
6 Chemical Kinetics
6.1 Equations
6.2 Chemical Networks
6.4 Differential Equations
6.9 Inhibition
6.11 Cooperativity
7 Multi-Stability
8 Periodic Behavior
8.3 Poincaré-Bendixson Theorem
8.5 Bendixson's Criterion
8.6 Hopf Bifurcations
8.9 Neurons
8.11 Model
9 PDE Models
9.1 Densities
9.4 Transport Equation
9.6 Attraction, Chemotaxis
11.1 Steady State for Laplace Equation on Some Simple Domains
11.2 Steady States for a Diffusion/Chemotaxis Model
11.3 Facilitated Diffusion
11.4 Density-Dependent Dispersal
12 Traveling Wave Solutions of Reaction-Diffusion Systems
1
1.1
Let us start by reviewing a subject treated in the basic differential equations course, namely how one
derives differential equations for simple exponential growth.
Suppose that N (t) counts the population of a microorganism in culture, at time t, and write the
increment in a time interval [t, t + h] as g(N (t), h), so that we have:
N (t + h) = N (t) + g(N (t), h) .
(The increment depends on the previous N (t), as well as on the length of the time interval.)
We expand g using a Taylor series to second order:
g(N, h) = a + bN + ch + eN² + f h² + KN h + cubic and higher order terms
(a, b, . . . are some constants). Observe that
g(0, h) ≡ 0 and g(N, 0) ≡ 0 ,
since there is no increment if there is no population or if no time has elapsed. The first condition tells
us that
a + ch + f h² + . . . ≡ 0 ,
for all h, so a = c = f = 0, and the second condition (check!) says that also b = e = 0.
Thus, we conclude that:
g(N, h) = KN h + cubic and higher order terms.
So, for h and N small:
N (t + h) = N (t) + KN (t)h ,    (1)
which says that
the increase in population during a (small) time interval
is proportional to the interval length and initial population size.
This means, for example, that if we double the initial population or if we double the interval,
the resulting population is doubled.
Obviously, (1) should not be expected to be true for large h, because of compounding effects.
It may or may not be true for large N , as we will discuss later.
We next explore the consequences of assuming Equation (1) holds for all small h>0 and all N .
As usual in applied mathematics, the proof is in the pudding:
one makes such an assumption, explores mathematical consequences that follow from it,
and generates predictions to be validated experimentally.
If the predictions pan out, we might want to keep the model.
If they do not, it is back to the drawing board and a new model has to be developed!
1.2
KN (t) = (1/h) (N (t + h) − N (t)) .
Taking the limit as h → 0, and remembering the definition of derivative, we conclude that the right-hand side converges to dN/dt (t). We conclude that N satisfies the following differential equation:
dN/dt = KN .    (2)
Bacterial populations tend to grow exponentially, so long as enough nutrients are available.
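A quick numerical illustration (a sketch added here, not part of the original notes; the parameter values are arbitrary): iterating the increment rule N(t+h) = N(t) + KN(t)h with a small h tracks the exponential solution N0·e^{Kt} of equation (2).

```python
import math

def iterate_growth(N0, K, T, steps):
    """Iterate N(t+h) = N(t) + K*N(t)*h over [0, T] in the given number of steps."""
    h = T / steps
    N = N0
    for _ in range(steps):
        N = N + K * N * h
    return N

# With many small steps, the iteration approaches N0*exp(K*T).
approx = iterate_growth(100.0, 0.5, 2.0, 100000)
exact = 100.0 * math.exp(0.5 * 2.0)
```

For a fixed large h the compounding error grows, which is exactly the caveat stated above about (1) not holding for large h.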
1.3
Suppose now there is some number B (the "carrying capacity" of the environment) so that populations N > B are not sustainable, i.e., dN/dt < 0 whenever N = N(t) > B.
It is reasonable to pick the simplest function that satisfies the stated requirement; in this case, a parabola:
dN/dt = rN (1 − N/B)  (for some constant r > 0)    (3)
where C = C(t) denotes the amount of the nutrient, which is depleted in proportion to the population change:¹
dC/dt = −α dN/dt = −αKN
1 If N(t) counts the number of individuals, this is somewhat unrealistic, as it ignores depletion of nutrient due to the growth of individuals once they are born; it is sometimes better to think of N(t) as the total biomass at time t.
1.4
We solve
dN/dt = rN (1 − N/B) = (r/B) N (B − N)
using again the method of separation of variables:
∫ B dN / (N (B − N)) = ∫ r dt ,
which, after integrating (by partial fractions) and solving for N, gives:
N(t) = N0 B / (N0 + (B − N0) e^{−rt}) .
(Figure: number of individuals and volume of P. caudatum and P. aurelia, cultivated separately, medium changed daily, 25 days.)
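The closed-form solution can be checked numerically (an illustrative sketch, not from the notes; the parameter values are arbitrary): the formula should satisfy dN/dt = rN(1 − N/B), equal N0 at t = 0, and approach B as t → ∞.

```python
import math

def N_logistic(t, N0, B, r):
    """Closed-form logistic solution N(t) = N0*B / (N0 + (B - N0)*exp(-r*t))."""
    return N0 * B / (N0 + (B - N0) * math.exp(-r * t))

# Centered finite difference of the formula vs. the right-hand side of the ODE.
N0, B, r, t, h = 10.0, 1000.0, 0.3, 5.0, 1e-6
dNdt = (N_logistic(t + h, N0, B, r) - N_logistic(t - h, N0, B, r)) / (2 * h)
N = N_logistic(t, N0, B, r)
rhs = r * N * (1 - N / B)
```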
1.5
Let's follow the above procedure with our example. We start by writing N = N̂N* and t = t̂t*, where stars indicate new variables and the hats are constants to be chosen. Substituting into dN/dt = C0 N − N² gives:
d(N̂N*)/d(t̂t*) = C0 N̂N* − N̂²N*² ,  i.e.  dN*/dt* = (t̂C0) N* − (t̂N̂) N*² .
(We used d(N̂N*)/dt* = N̂ dN*/dt* and d(t̂t*)/dt* = t̂, which are justified by the chain rule.)
Look at this last equation: we'd like to make t̂C0 = 1 and t̂N̂ = 1. But this can be done! Just pick:
t̂ := 1/C0  and  N̂ := C0 ,
so that:
dN*/dt* = (1 − N*) N*
but we should remember that the new N and t are rescaled versions of the old ones:
N*(t*) = (1/C0) N(t̂t*) , in other words, N(t) = C0 N*(C0 t) .
We may solve the above equation and plot, and then the plot in original variables can be seen as a stretching of the plot in the new variables.
(We may think of N*, t* as quantity & time in some new units of measurement. This procedure is related to nondimensionalization of equations, which we'll mention later.)
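A small numerical check of the rescaling (a sketch under the assumption that the example equation is dN/dt = C0·N − N², with an arbitrary C0): if N* solves dN*/dt* = (1 − N*)N*, then N(t) = C0·N*(C0·t) should solve the original equation.

```python
import math

def Nstar(tstar, x0):
    """Solution of the rescaled equation dN*/dt* = (1 - N*)N*, with N*(0) = x0."""
    return x0 / (x0 + (1 - x0) * math.exp(-tstar))

def N_orig(t, C0, x0):
    """Undo the rescaling: N(t) = C0 * N*(C0*t)."""
    return C0 * Nstar(C0 * t, x0)

# Finite-difference check that N(t) satisfies dN/dt = C0*N - N^2.
C0, x0, t, h = 2.0, 0.1, 1.5, 1e-6
d = (N_orig(t + h, C0, x0) - N_orig(t - h, C0, x0)) / (2 * h)
N = N_orig(t, C0, x0)
```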
1.6
inflow
- F
nutrient supply
N (t), C(t)
culture chamber
= K(C(t)) N (t) t
= [N (t + t) N (t)]
1.7
dN/dt = K(C)N − (F/V)N
and
dC/dt = −αK(C)N − (F/V)C + (F/V)C0
1.8
Michaelis-Menten Kinetics
A reasonable choice for K(C) is as follows (later, we come back to this topic in much more detail):
K(C) = kmax C / (kn + C) .
This is an example of a "Michaelis-Menten" rate, often written as Vmax C/(Km + C); for small C it is approximately linear, ≈ (Vmax/Km) C, and it saturates at Vmax for large C.
Note that when C = Km, the growth rate is 1/2 ("m" for "middle") of maximal, i.e. Vmax/2.
We thus have these equations for the chemostat with MM Kinetics:
dN/dt = (kmax C/(kn + C)) N − (F/V) N
dC/dt = −α (kmax C/(kn + C)) N − (F/V) C + (F/V) C0
Our next goal is to study the behavior of this system of two ODEs for all possible values of the six parameters kmax, kn, F, V, C0, α.
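These equations can be explored with a simple Euler scheme (a sketch, not from the notes; the parameter values below are arbitrary, and alpha denotes the nutrient-consumption constant multiplying the uptake term).

```python
def chemostat_step(N, C, kmax, kn, F, V, C0, alpha, dt):
    """One Euler step of the chemostat equations with MM uptake."""
    K = kmax * C / (kn + C)
    dN = K * N - (F / V) * N
    dC = -alpha * K * N - (F / V) * C + (F / V) * C0
    return N + dt * dN, C + dt * dC

# Sample parameters with kmax > F/V: starting from a small inoculum, the
# culture settles into a positive steady state (here (N, C) = (1, 1)).
kmax, kn, F, V, C0, alpha = 1.0, 1.0, 0.5, 1.0, 2.0, 1.0
N, C = 0.05, 2.0
for _ in range(200000):
    N, C = chemostat_step(N, C, kmax, kn, F, V, C0, alpha, 1e-3)
```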
1.9
Taking reciprocals in the Michaelis-Menten formula:
1/K(C) = (Km + C)/(Vmax C) = 1/Vmax + (Km/Vmax)(1/C) .
1.10
Following the procedure outlined earlier, we write N = N̂N*, C = ĈC*, t = t̂t*, and substitute:
d(N̂N*)/d(t̂t*) = (kmax ĈC*/(kn + ĈC*)) N̂N* − (F/V) N̂N*
d(ĈC*)/d(t̂t*) = −α (kmax ĈC*/(kn + ĈC*)) N̂N* − (F/V) ĈC* + (F/V) C0 .
Using d(N̂N*)/d(t̂t*) = (N̂/t̂) dN*/dt* and d(ĈC*)/d(t̂t*) = (Ĉ/t̂) dC*/dt*, we obtain:
dN*/dt* = (t̂kmax C*/(kn/Ĉ + C*)) N* − (t̂F/V) N*
dC*/dt* = −(α t̂kmax N̂/Ĉ) (C*/(kn/Ĉ + C*)) N* − (t̂F/V) C* + t̂F C0/(V Ĉ) .
We would like to make kn/Ĉ = 1, t̂F/V = 1, and α t̂kmax N̂/Ĉ = 1. This can indeed be done: just pick
Ĉ := kn ,  t̂ := V/F ,  and  N̂ := Ĉ/(α t̂kmax) = kn F/(α V kmax) ,
so that:
dN*/dt* = ((V kmax/F) C*/(1 + C*)) N* − N*
dC*/dt* = −(C*/(1 + C*)) N* − C* + C0/kn
or, dropping stars and introducing two new constants α1 = V kmax/F and α2 = C0/kn, we end up with:
dN/dt = α1 (C/(1 + C)) N − N
dC/dt = −(C/(1 + C)) N − C + α2 .
We will study how the behavior of the chemostat depends on these two parameters, always remembering to translate back into the original parameters and units.
The old and new variables are related as follows:
N(t) = N̂ N*(t/t̂) = (kn F/(α V kmax)) N*(F t/V) ,  C(t) = Ĉ C*(t/t̂) = kn C*(F t/V) .
Additional homework problem: show that with t̂ = 1/kmax (and appropriate choices of N̂ and Ĉ) one can also reduce the system to two parameters.
Remark on units
Since kmax is a rate (obtained at saturation), it has units time⁻¹, as does F/V; thus, α1 = V kmax/F is dimensionless.
Similarly, kn has units of concentration (since it is being added to C, and in fact for C = kn we obtain half of the max rate kmax), so α2 = C0/kn is also dimensionless.
Dimensionless constants are a nice thing to have, since then we can talk about their being small or large. (What does it mean to say that a person of height 2 is tall? 2 cm? 2 in? 2 feet? 2 meters?) We do not have time to cover the topic of units and non-dimensionalization in this course, however.
2.1
Steady States
The key to the geometric analysis of systems of ODEs is to write them in vector form:
dX
= F (X)
dt
The vector X = X(t) has some number n of components, each of which is a function of time.
One writes the components as xi (i = 1, 2, 3, . . . , n), or when n = 2 or n = 3 as x, y or x, y, z, or one uses notations that are related to the problem being studied, like N for the number (or biomass) of a population and C for the concentration of a nutrient.
For example, the chemostat
dN/dt = α1 (C/(1 + C)) N − N
dC/dt = −(C/(1 + C)) N − C + α2
may be written as dX/dt = F(X) = (f(N, C), g(N, C)), provided that we define:
f(N, C) = α1 (C/(1 + C)) N − N
g(N, C) = −(C/(1 + C)) N − C + α2 .
The steady states are found by solving:
α1 (C/(1 + C)) N − N = 0
−(C/(1 + C)) N − C + α2 = 0 .
The word "equilibrium" is used in mathematics as a synonym for steady state, but the term has a more restrictive meaning for physicists and chemists.
A steady state X̄ = (N̄, C̄) must satisfy:
either N̄ = 0 or α1 C̄/(1 + C̄) = 1 .
We consider each of these two possibilities separately.
In the first case, N̄ = 0. Since also it must hold that
−(C̄/(1 + C̄)) N̄ − C̄ + α2 = −C̄ + α2 = 0 ,
we conclude that C̄ = α2, giving the steady state X̄1 = (0, α2).
In the second case, α1 C̄/(1 + C̄) = 1 gives C̄ = 1/(α1 − 1) (check!), and then the second equation gives N̄ = α1 (α2 − 1/(α1 − 1)), so that:
X̄2 = ( α1 (α2 − 1/(α1 − 1)) , 1/(α1 − 1) ) .
However, observe that an equilibrium is physically meaningful only if C̄ ≥ 0 and N̄ ≥ 0. Negative populations or concentrations, while mathematically valid, do not represent physical solutions.
The first steady state is always well-defined in this sense, but not the second.
This equilibrium X̄2 is well-defined and makes physical sense only if
α1 > 1 and α2 > 1/(α1 − 1)    (4)
or equivalently:
α1 > 1 and α2 (α1 − 1) > 1 .    (5)
(5)
Reducing the number of parameters to just two (1 and 2 ) allowed us to obtain this very elegant and
compact condition. But this is not a satisfactory way to explain our conclusions, because 1 , 2 were
only introduced for mathematical convenience, but were not part of the original problem.
In terms of the original parameters, the conditions read:
kmax > F/V  and  C0 > kn / ((V/F) kmax − 1) .
The first condition means roughly that the maximal possible bacterial reproductive rate is larger than the tank emptying rate, which makes intuitive sense. As an exercise, you should similarly interpret in words the various things that the second condition is saying.
Meaning of Equilibria: If a point X̄ is an equilibrium, then the constant vector X(t) ≡ X̄ is a solution of the system of ODEs, because a constant has zero derivative: dX̄/dt = 0, and F(X̄) = 0.
2.2
Linearization
We wish to analyze the behavior of solutions of the ODE system dX/dt = F(X) near a given steady state X̄. For this purpose, it is convenient to introduce the displacement (translation) relative to X̄:
X̂ = X − X̄
and to write an equation for the variables X̂. We have:
dX̂/dt = dX/dt − dX̄/dt = dX/dt = F(X) = F(X̄ + X̂) = F(X̄) + F′(X̄) X̂ + o(X̂) ≈ A X̂
(the term F(X̄) vanishes because F(X̄) = 0), where A = F′(X̄) is the Jacobian of F evaluated at X̄. We dropped the higher-order-than-linear terms in X̂ because we are only interested in X̂ ≈ 0.
For the chemostat, at the steady state X̄2 one computes:
A = F′(X̄2) = [ 0        (α1 − 1)β                 ]
              [ −1/α1    −((α1 − 1)β + α1)/α1 ]
where we used the shorthand: β = α2 (α1 − 1) − 1. (Prove this as an exercise!)
Remark. An important result, the Hartman-Grobman Theorem, justifies the study of linearizations. It states that solutions of the nonlinear system dX/dt = F(X) in the vicinity of the steady state X̄ look qualitatively just like solutions of the linearized equation dX̂/dt = AX̂ do in the vicinity of the point X̂ = 0.⁵
For linear systems, stability may be analyzed by looking at the eigenvalues of A, as we see next.
5
The theorem assumes that none of the eigenvalues of A have zero real part (hyperbolic fixed point). Looking like
is defined in a mathematically precise way using the notion of homeomorphism which means that the trajectories look
the same after a continuous invertible transformation, that is, a sort of nonlinear distortion of the phase space.
2.3
For the purposes of this course, we'll say that a linear system dX/dt = AX, where A is an n × n matrix, is stable if all solutions X(t) have the property that X(t) → 0 as t → ∞. The main theorem is:
stability is equivalent to: the real parts of all the eigenvalues of A are negative
For nonlinear systems dX/dt = F(X), one applies this condition as follows:⁶
For each steady state X̄, compute A, the Jacobian of F evaluated at X̄, and test its eigenvalues.
If all the eigenvalues of A have negative real part, conclude local stability: every solution of dX/dt = F(X) that starts near X = X̄ converges to X̄ as t → ∞.
If A has even one eigenvalue with positive real part, then the corresponding steady state of the nonlinear system is unstable, meaning that at least some solutions that start near X̄ move away from it.
The conclusions are local: nothing is being claimed, that is to say, about behaviors of dX/dt = F(X) that start at initial conditions that are far away from X̄.
For example, compare the two equations: dx/dt = −x − x³ and dx/dt = −x + x².
In both cases, the linearization at x = 0 is just dx/dt = −x, which is stable.
In the first case, it turns out that all the solutions of the nonlinear system also converge to zero. (Just look at the phase line.)
However, in the second case, even though the linearization is the same, it is not true that all solutions converge to zero. For example, starting at a state x(0) > 1, solutions diverge to +∞ as t → ∞. (Again, this is clear from looking at the phase line.)
It is often confusing to students that from the fact that all solutions of dX/dt = AX converge to zero, one concludes for the nonlinear system that all solutions converge to X̄.
The confusion is due simply to notations: we are really studying dX̂/dt = AX̂, where X̂ = X − X̄, but we usually drop the hats when looking at the linear equation dX/dt = AX.
Regarding the eigenvalue test for linear systems, let us recall, informally, the basic ideas.
The general solution of dX/dt = AX, assuming⁷ distinct eigenvalues λi for A, can be written as:
X(t) = Σ_{i=1..n} ci e^{λi t} vi
where for each i, Avi = λi vi (an eigenvalue/eigenvector pair) and the ci are constants (that can be fit to initial conditions).
It is not surprising that eigen-pairs appear: if X(t) = e^{λt} v is a solution, then λ e^{λt} v = dX/dt = A e^{λt} v, which implies (divide by e^{λt}) that Av = λv.
6 Things get very technical and difficult if A has eigenvalues with exactly zero real part. The field of mathematics called "Center Manifold Theory" studies that problem.
7 If there are repeated eigenvalues, one must fine-tune a bit: it is necessary to replace some terms ci e^{λi t} vi by ci t e^{λi t} vi (or higher powers of t) and to consider "generalized eigenvectors".
We also recall that everything works in the same way even if some eigenvalues are complex, though it is more informative to express things in alternative real form (using Euler's formula).
To summarize:
- Real eigenvalues λ correspond⁸ to terms in solutions that involve real exponentials e^{λt}, which can only approach zero as t → +∞ if λ < 0.
- Non-real complex eigenvalues λ = a + ib are associated to oscillations. They correspond⁹ to terms in solutions that involve complex exponentials e^{λt}. Since one has the general formula e^{λt} = e^{at+ibt} = e^{at}(cos bt + i sin bt), solutions, when re-written in real-only form, contain terms of the form e^{at} cos bt and e^{at} sin bt, and therefore converge to zero (with decaying oscillations of "period" 2π/b) provided that a < 0, that is to say, that the real part of λ is negative. Another way to see this is to notice that asking that e^{λt} → 0 is the same as requiring that the magnitude |e^{λt}| → 0. Since |e^{λt}| = e^{at} √((cos bt)² + (sin bt)²) = e^{at}, we see once again that a < 0 is the condition needed in order to insure that e^{λt} → 0.
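The magnitude identity |e^{λt}| = e^{at} can be checked directly in code (an illustrative sketch with arbitrary values of a, b, and t):

```python
import cmath
import math

# For a complex eigenvalue lam = a + i*b, |exp(lam*t)| should equal exp(a*t).
lam = complex(-0.3, 2.0)
t = 1.7
magnitude = abs(cmath.exp(lam * t))
expected = math.exp(lam.real * t)
```

Since the real part is negative here, the magnitude is below 1, i.e. the oscillation decays.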
Special Case: 2 by 2 Matrices
In the case n = 2, it is easy to check directly if dX/dt = AX is stable, without having to actually compute the eigenvalues. Suppose that
A = [ a11  a12 ]
    [ a21  a22 ]
and remember that
trace A = a11 + a22 ,  det A = a11 a22 − a12 a21 .
Then:
stability is equivalent to: trace A < 0 and det A > 0.
(Proof: the characteristic polynomial is λ² + bλ + c, where c = det A and b = −trace A. Both roots have negative real part if
(complex case) b² − 4c < 0 and b > 0
or
(real case) b² − 4c ≥ 0, b > 0, and √(b² − 4c) < b,
and the last condition is equivalent to b² − 4c < b², i.e. b > 0 and b² > b² − 4c, i.e. b > 0 and c > 0.)
Moreover, solutions are oscillatory (complex eigenvalues) if (trace A)2 < 4 det A, and exponential
(real eigenvalues) otherwise. We come back to this later (trace/determinant plane).
(If you are interested: for higher dimensions (n>2), one can also check stability without computing
eigenvalues, although the conditions are more complicated; google Routh-Hurwitz Theorem.)
8 To be precise, if there are repeated eigenvalues, one may need to also consider terms of the slightly more complicated form t^k e^{λt}, but the reasoning is exactly the same in that case.
9 For complex repeated eigenvalues, one may need to consider terms t^k e^{λt}.
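The 2-by-2 trace/determinant test above takes only a few lines of code (a sketch, not from the notes):

```python
def classify_2x2(a11, a12, a21, a22):
    """Classify dX/dt = AX for a 2x2 matrix A:
    stable iff trace A < 0 and det A > 0;
    oscillatory iff (trace A)^2 < 4 det A."""
    tr = a11 + a22
    det = a11 * a22 - a12 * a21
    return tr < 0 and det > 0, tr * tr < 4 * det

# A damped oscillator x'' = -x - 0.1 x' gives A = [[0, 1], [-1, -0.1]]:
stable, oscillatory = classify_2x2(0, 1, -1, -0.1)
```

Here both flags come out True: decaying oscillations, as the complex-eigenvalue discussion above predicts.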
2.4
For the chemostat, we computed the Jacobian at the positive steady state:
A = F′(X̄2) = [ 0        (α1 − 1)β                 ]
              [ −1/α1    −((α1 − 1)β + α1)/α1 ]
where we used the shorthand: β = α2 (α1 − 1) − 1.
The trace of this matrix A is negative, and the determinant is positive, because α1 − 1 > 0 and β > 0 imply:
det A = (α1 − 1)β / α1 > 0 .
Therefore, the steady state X̄2 is stable.
At X̄1 = (0, α2), the Jacobian is:
A = F′(X̄1) = [ α1 α2/(1 + α2) − 1    0  ]
              [ −α2/(1 + α2)          −1 ]
and its determinant is:
det A = −( α1 α2/(1 + α2) − 1 ) = (1 + α2 − α1 α2)/(1 + α2) = (1 − α2(α1 − 1))/(1 + α2) < 0
(negative because α2(α1 − 1) > 1 when X̄2 exists), and therefore the steady state X̄1 is unstable.
It turns out that the point X̄1 is a saddle: small perturbations, where N(0) > 0, will tend away from X̄1. (Intuitively, if even a small amount of bacteria is initially present, growth will occur. As it turns out, the growth is such that the other equilibrium X̄2 is approached.)
Additional homework problem: Analyze the stability of X̄1 when the parameters are chosen such that the equilibrium X̄2 does not exist.
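The closed-form Jacobian and the sign conditions can be verified numerically (a sketch; the sample values α1 = α2 = 2 satisfy the existence conditions):

```python
def vf(N, C, a1, a2):
    """Dimensionless chemostat vector field (f, g)."""
    f = a1 * C / (1 + C) * N - N
    g = -C / (1 + C) * N - C + a2
    return f, g

def jacobian_X2(a1, a2):
    """Closed-form Jacobian at X2 = (a1*(a2 - 1/(a1-1)), 1/(a1-1)),
    with beta = a2*(a1-1) - 1, as in the text."""
    beta = a2 * (a1 - 1) - 1
    return [[0.0, (a1 - 1) * beta],
            [-1.0 / a1, -((a1 - 1) * beta + a1) / a1]]

a1, a2 = 2.0, 2.0          # sample values: a1 > 1 and a2*(a1-1) > 1
Nbar, Cbar = a1 * (a2 - 1 / (a1 - 1)), 1 / (a1 - 1)
A = jacobian_X2(a1, a2)

# Check the first row of A against centered finite differences at (Nbar, Cbar).
h = 1e-6
fN = (vf(Nbar + h, Cbar, a1, a2)[0] - vf(Nbar - h, Cbar, a1, a2)[0]) / (2 * h)
fC = (vf(Nbar, Cbar + h, a1, a2)[0] - vf(Nbar, Cbar - h, a1, a2)[0]) / (2 * h)

# Trace negative and determinant positive => X2 is stable.
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
```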
3.1
(Figure: an organ exposed to a drug carried by the blood, with inflow and outflow both at rate F.)
V = volume of blood; F = Fin = Fout are the blood flows; N(t) = number of cells (assumed equal in mass) exposed to the drug; C0, C(t) = drug concentrations.
In drug infusion models, if a pump delivers the drug at a certain concentration,
the actual C0 would account for the dilution rate when injected into the blood.
We assume that things are well-mixed although more realistic models use the fact
that drugs may only affect e.g. the outside layers of a tumor.
The flow F represents blood brought into the organ through an artery, and the blood coming out.
The key differences with the chemostat are:
the cells in question reproduce at a rate that is, in principle, independent of the drug,
but the drug has a negative effect on the growth, a kill rate that we model by some function
K(C), and
the outflow contains only (unused) drug, and not any cells.
If we assume that cells reproduce exponentially and the drug is consumed at a rate proportional to the
kill rate K(C)N , we are led to:
dN/dt = −K(C)N + kN
dC/dt = −αK(C)N − (F/V)C + (F/V)C0
(α is the proportionality constant for drug consumption).
A homework problem asks you to analyze these equations, as well as a variation of the model, in which the reproduction rate follows a different law ("Gompertz law").
3.2
Compartmental Models
(Figure: two compartments with flows F12 and F21 between them, external inflows u1, u2, and degradation rates d1, d2.)
Compartmental models are very common in pharmacology and many other biochemical applications.
They are used to account for different behaviors in different tissues.
In the simplest case, there are two compartments, such as an organ and the blood in circulation.
We model the two-compartment case now (the general case is similar).
We use two variables x1 , x2 , for the concentrations (mass/vol) of a substance
(such as a drug, a hormone, a metabolite, a protein, or some other chemical) in each compartment,
and m1 , m2 for the respective masses.
The flow (vol/sec) from compartment i to compartment j is denoted by Fij .
When the substance happens to be in compartment i, a fraction di Δt of its mass degrades, or is consumed, in any small interval of time of length Δt.
Sometimes, there may also be an external source of the substance, being externally injected; in that case, we let ui denote the inflow (mass/sec) into compartment i.
On a small interval Δt, the increase (or decrease, if the number is negative) in the mass in the first compartment is:
m1(t + Δt) − m1(t) = −F12 x1 Δt + F21 x2 Δt − d1 m1 Δt + u1 Δt .
(For example, the mass flowing from compartment 1 to compartment 2 in a given time is computed as:
flow × concentration in 1 × time = (vol/time) × (mass/vol) × time = mass .)
Similarly, we have an equation for m2. We divide by Δt and take limits as Δt → 0, leading to the following system of two linear differential equations:
dm1/dt = −F12 m1/V1 + F21 m2/V2 − d1 m1 + u1
dm2/dt = F12 m1/V1 − F21 m2/V2 − d2 m2 + u2
(we used that xi = mi/Vi). So, for the concentrations xi = mi/Vi, we have:
dx1/dt = −(F12/V1) x1 + (F21/V1) x2 − d1 x1 + u1/V1
dx2/dt = (F12/V2) x1 − (F21/V2) x2 − d2 x2 + u2/V2
A homework problem asks you to analyze an example of such a system.
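A minimal Euler simulation of the concentration equations (a sketch, not from the notes; all rates and volumes are arbitrary sample values) shows the concentrations settling to a steady state under constant infusion:

```python
def compartments_step(x1, x2, F12, F21, V1, V2, d1, d2, u1, u2, dt):
    """One Euler step of the two-compartment concentration equations."""
    dx1 = -(F12 / V1) * x1 + (F21 / V1) * x2 - d1 * x1 + u1 / V1
    dx2 = (F12 / V2) * x1 - (F21 / V2) * x2 - d2 * x2 + u2 / V2
    return x1 + dt * dx1, x2 + dt * dx2

# Constant infusion u1 into compartment 1 only; degradation in both.
F12, F21, V1, V2, d1, d2, u1, u2 = 1.0, 0.5, 1.0, 2.0, 0.1, 0.2, 1.0, 0.0
x1, x2 = 0.0, 0.0
for _ in range(100000):
    x1, x2 = compartments_step(x1, x2, F12, F21, V1, V2, d1, d2, u1, u2, 1e-2)
# At this point both right-hand sides are (numerically) zero: a steady state.
```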
4.1
One interprets dX/dt = F(X) as a "flow" in Rⁿ: at each position X, F(X) is a vector that indicates in which direction to move (and its magnitude says at what speed).
4.2
10 Zooming into points that are not equilibria is not interesting; a theorem called the "flow box theorem" says (for a vector field defined by differentiable functions) that the flow picture near a point X̄ that is not an equilibrium is quite "boring," as it consists essentially of a bundle of parallel lines.
11 The cases when one or both eigenvalues are zero, or are both nonzero but equal, can be also analyzed, but they are a little more complicated.
Trace/Determinant Plane
We next compute the "type" of the local equilibria for the chemostat example, assuming that α1 > 1 and α2(α1 − 1) − 1 > 0 (so X̄2 is positive).
Recall that we had computed the Jacobian at the positive equilibrium:
A = F′(X̄2) = [ 0        (α1 − 1)β                 ]
              [ −1/α1    −((α1 − 1)β + α1)/α1 ]
with β = α2(α1 − 1) − 1. Here trace A < 0 and det A = (α1 − 1)β/α1 > 0, and moreover
(trace A)² − 4 det A = ((α1 − 1)β − α1)²/α1² ≥ 0 ,
so the eigenvalues are real and negative: X̄2 is a stable node.¹³
12 "Centers" are highly "non-robust" in a way that we will discuss later, so they rarely appear in realistic biological models.
13 If (α1 − 1)β/α1 ≠ 1; otherwise there are repeated real eigenvalues; we still have stability, but we'll ignore that very special case.
4.3
Nullclines
The intersections between the nullclines are the steady states. This is because each nullcline is the set
where dx1 /dt = 0, dx2 /dt = 0, . . ., so intersecting gives points at which all dxi /dt = 0, that is to say
F (X) = 0 which is the definition of steady states.
As an example, let us take the chemostat, for which the vector field is F(X) = (f(N, C), g(N, C)), where:
f(N, C) = α1 (C/(1 + C)) N − N
g(N, C) = −(C/(1 + C)) N − C + α2 .
The N-nullcline is obtained by setting
α1 (C/(1 + C)) N − N = 0 ,
which gives two lines: C = 1/(α1 − 1) and N = 0.
On this set, the arrows are vertical, because dN/dt = 0 (no movement in N direction).
The C-nullcline is obtained by setting −(C/(1 + C)) N − C + α2 = 0.
We can describe a curve in any way we want; in this case, it is a little simpler to solve N = N(C) than C = C(N): the C-nullcline is the curve
N = (α2 − C)(1 + C)/C = −1 − C + α2/C + α2 .
On this set, the arrows are parallel to the N-axis, because dC/dt = 0 (no movement in C direction).
To plot, note that N(α2) = 0 and N(C) is a decreasing function of C that goes to +∞ as C ↘ 0, and then obtain C = C(N) by flipping along the main diagonal (dotted and dashed curves in the graph, respectively). We show this construction, and the nullclines look as follows:
14 Actually, linearization is sometimes not sufficient even for local analysis. Think of dx/dt = x³ and dx/dt = −x³, which have the same linearization (dx/dt = 0) but very different local pictures at zero. The area of mathematics called "Center manifold theory" deals with such very special situations, where eigenvalues may be zero or more generally have zero real part.
To decide whether the arrows point up or down (sign of dC/dt) on the N-nullclines, we look at:
dC/dt = −(C/(1 + C)) N − C + α2 .
On the line N = 0, this is −C + α2, which is:
> 0 if C < α2
< 0 if C > α2 .
On the line C = 1/(α1 − 1), it is −N/α1 − 1/(α1 − 1) + α2, so the arrow points up if N < α1 α2 − α1/(α1 − 1):
> 0 if N < α1 α2 − α1/(α1 − 1)
< 0 if N > α1 α2 − α1/(α1 − 1) .
To decide whether the arrows point right or left (sign of dN/dt) on the C-nullcline, we look at:
dN/dt = N ( α1 C/(1 + C) − 1 )
> 0 if C > 1/(α1 − 1)
< 0 if C < 1/(α1 − 1)
(since N ≥ 0, the sign of the expression is the same as the sign of α1 C/(1 + C) − 1).
We have, therefore, this picture:
What about the direction of the vector field elsewhere, not just on nullclines?
The key observation is that the only way that arrows can reverse direction is by crossing a nullcline.
For example, if dx1 /dt is positive at some point A, and it is negative at some other point B, then A and
B must be on opposite sides of the x1 nullcline. The reason is that, were we to trace a path between
A and B (any path, not necessarily a solution of the system), the derivative dx1 /dt at the points in
the path varies continuously, and therefore (intermediate value theorem) there must be a point in this
path where dx1 /dt = 0.
In summary: if we look at regions demarcated by the nullclines, then the orientations of arrows remain the same in each such region.
For example, for the chemostat, we have 4 regions, as shown in the figure.
In region 1, dN/dt > 0 and dC/dt < 0, since these are the signs on the boundaries of the region. Therefore the flow is "Southeast" (↘) in that region. Similarly for the other three regions.
We indicate this information in the phase plane:
Note that the arrows are just icons intended to indicate if the flow is
generally SE (dN/dt > 0 and dC/dt < 0), NE, etc, but the actual numerical slopes will vary
(for example, near the nullclines, the arrows must become either horizontal or vertical).
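One can confirm the direction of the flow at a sample point in code (an illustrative sketch using the sample values α1 = α2 = 2):

```python
def vf(N, C, a1=2.0, a2=2.0):
    """Dimensionless chemostat vector field, with sample parameter values."""
    return a1 * C / (1 + C) * N - N, -C / (1 + C) * N - C + a2

# A sample point where dN/dt > 0 and dC/dt < 0: the flow there is "Southeast".
dN, dC = vf(0.5, 2.0)
```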
4.4
Global Behavior
We already know that trajectories that start near the positive steady state X̄2 converge to it (local stability), and that most trajectories that start near X̄1 go away from it (instability).
(Still assuming, obviously, that the parameters have been chosen in such a way that the positive steady state exists.)
Let us now sketch a proof that, in fact, every trajectory converges to X̄2 (with the exception only of those trajectories that start with N(0) = 0).
The practical consequence of this global attraction result is that, no matter what the initial conditions, the chemostat will settle into the steady state X̄2.
It is helpful to consider the following line (L):
N + α1 C − α1 α2 = 0
which passes through the points X̄1 = (0, α2) and X̄2 = (α1 α2 − α1/(α1 − 1), 1/(α1 − 1)).
Note that (α1 α2, 0) is also on this line.
The picture is as follows, where the arrows are obtained from the flow direction, as shown earlier:
We claim that this line is invariant, that is, solutions that start in L must remain in L. Even more
interesting, all trajectories (except those that start with N (0) = 0) converge to L.
For any trajectory, consider the following function:
z(t) = N(t) + α1 C(t) − α1 α2
and observe that
dz/dt = dN/dt + α1 dC/dt = α1 (C/(1 + C)) N − N + α1 ( −(C/(1 + C)) N − C + α2 ) = −N − α1 C + α1 α2 = −z
which implies that z(t) = z(0) e^{−t}. Therefore, z(t) = 0 for all t > 0, if z(0) = 0 (invariance), and in general z(t) → 0 as t → +∞ (solutions approach L).
Moreover, points on the line N + α1 C − α1 α2 = m are close to points on L if m is near zero.
Since L is invariant and there are no steady states in L except X̄1 and X̄2, the open segment from X̄1 to X̄2 is a trajectory that "connects" the unstable state X̄1 to the stable state X̄2. Such a trajectory is called a heteroclinic connection.¹⁸
Now, we know that all trajectories approach L, and cannot cross L (no trajectories can ever cross, by uniqueness of solutions, as seen in your ODE class).
Suppose that a trajectory starts, and hence remains, on top of L (the argument is similar if it remains under L), and with N(0) > 0.
Since the trajectory gets closer and closer to L, and must stay in the first quadrant (why?), it will either converge to X̄2 "from the NW" or it will eventually enter the region with the "NW arrow", at which point it must have turned and started moving towards X̄2. In summary, every trajectory converges.
18 Exercise: check the eigenvectors at X̄1 and X̄2 to see that L matches the linearized eigen-directions.
The modeling of infectious diseases and their spread is an important part of mathematical biology,
part of the field of mathematical epidemiology.
Modeling is an important tool for gauging the impact of different vaccination programs on the control
or eradication of diseases.
We will study here only a simple ODE model, which takes into account neither age structure nor
geographical distribution. More sophisticated models can be based on compartmental systems, with
compartments corresponding to different age groups, or on partial differential equations, where independent
variables specify location, and so on; but the simple ODE model already brings up many of the
fundamental ideas.
The classical work on epidemics dates back to Kermack and McKendrick, in 1927. We will study
their SIR and SIRS models without vital dynamics (births and deaths; see a homework problem
with a model with vital dynamics).
To explain the model, let us think of a flu epidemic, but the ideas are very general.
In the population, there will be a group of people who are Susceptible to being passed on the virus by
the Infected individuals.
At some point, the infected individuals get so sick that they have to stay home, and become part of
the Removed group. Once they recover, they still cannot infect others, nor can they be infected,
since they have developed immunity.
The numbers of individuals in the three classes will be denoted by S, I, and R respectively, and hence
the name SIR model.
Depending on the time-scale of interest for analysis, one may also allow for the fact that individuals
in the Removed group may eventually return to the Susceptible population, which would happen if
immunity is only temporary. This is the SIRS model (the last S to indicate flow from R to S),
which we will study next.
We assume that these numbers are all functions of time t, and that they can be modeled as
real numbers. (Non-integer values make no sense for populations, but this is a mathematical convenience. Or,
if one studies probabilistic instead of deterministic models, these numbers represent expected values
of random variables, which can easily be non-integers.)
The basic modeling assumption is that the number of new infectives I(t+Δt) − I(t) in a small interval
of time [t, t + Δt] is proportional to the product S(t)I(t)Δt.
Let us try to justify intuitively why this makes sense. (As usual, experimentation and fitting to data
should determine if this is a good assumption. In fact, alternative models have been proposed as
well.)
Suppose that transmission of the disease can happen only if a susceptible and infective are very close
to each other, for instance by direct contact, sneezing, etc.
We suppose that there is some region around a given susceptible individual, so that he can only get
infected if an infective enters that region:
We assume that, for each infective individual, there is a probability p = βΔt that this infective will
happen to pass through this region in the time interval [t, t + Δt], where β is some positive constant
that depends on the size of the region, how fast the infectives are moving, etc. (Think of the infective
traveling at a fixed speed: in twice the length of time, there is twice the chance that it will pass by this
region.) We take Δt ≈ 0, so also p ≈ 0.
The probability that this particular infective will not enter the region is 1 − p, and, assuming independence, the probability that no infective enters is (1 − p)^I.
So the probability that some infective comes close to our susceptible is, using a binomial expansion:

1 − (1 − p)^I = 1 − (1 − pI + (I choose 2) p^2 − ...) ≈ pI , since p ≪ 1.

Thus, we can say that a particular susceptible has a probability pI of being infected. Since there are
S of them, we may assume, if S is large, that the total number infected will be S · pI.
We conclude that the number of new infections is:

I(t + Δt) − I(t) = pSI = βSI Δt

and dividing by Δt and taking limits, we have a term βSI in dI/dt, and a term −βSI in dS/dt.
This is called a mass action kinetics assumption, and is also used when writing elementary chemical
reactions. In chemical reaction theory, one derives this mass action formula using collision theory
among particles (for instance, molecules), taking into account temperature (which affects how fast
particles are moving), shapes, etc.
We also have to model infectives being removed: it is reasonable to assume that a certain fraction of
them is removed per unit of time, giving a term −νI in dI/dt, for some constant ν.
Similarly, there is a term γR for the flow of removed individuals back into the susceptible population.
The figure is a little misleading: this is not a compartmental system, in which the flow from S to I
would be proportional to S alone. For example, when I = 0, no one gets infected; hence the product term in the
equations:

dS/dt = −βSI + γR
dI/dt = βSI − νI
dR/dt = νI − γR .
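As a quick sanity check of the equations above, one can integrate them numerically and verify that the total population N = S + I + R stays constant. The following Python sketch uses forward Euler; the parameter and initial values are illustrative assumptions, not values prescribed by the notes.

```python
# Forward-Euler integration of the SIRS equations; parameter values are
# illustrative assumptions, not taken from the notes.
def simulate_sirs(S, I, R, beta, nu, gamma, dt=0.001, steps=20000):
    for _ in range(steps):
        dS = -beta * S * I + gamma * R
        dI = beta * S * I - nu * I
        dR = nu * I - gamma * R
        S, I, R = S + dt * dS, I + dt * dI, R + dt * dR
    return S, I, R

S, I, R = simulate_sirs(S=2.0, I=0.1, R=0.0, beta=1.0, nu=1.0, gamma=1.0)
print(S + I + R)  # stays (up to roundoff) at the initial total N = 2.1
```

The increments dS + dI + dR sum to zero at every step, which is exactly the conservation law dN/dt = 0 used below to reduce the system to two equations.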
(There are many possible variations; here are some. In a model with vital dynamics (see the homework
assignments), one also adds birth and death rates to this model. Another: a vaccine is given to a
certain percentage of the susceptibles, at a given rate, causing the vaccinated individuals to become
removed. Yet another: there is a type of mosquito that makes people infected.)
5.1 Analysis of Equations
Let N = S(t) + I(t) + R(t). Since dN/dt = 0, N is constant, the total size of the population.
Therefore, even though we are interested in a system of three equations, this conservation law allows
us to eliminate one equation, for example, using R = N S I.
We are led to the study of the following two-dimensional system:

dS/dt = −βSI + γ(N − S − I)
dI/dt = βSI − νI .
I-nullcline: the union of the lines I = 0 and S = ν/β.
S-nullcline: the curve I = γ(N − S)/(βS + γ).

The steady states, obtained by intersecting the nullclines, are

X̄1 = (N, 0)  and  X̄2 = (ν/β, γ(N − ν/β)/(ν + γ)) ,

where X̄2 makes physical sense (positive I coordinate) only if ν/β < N.
Homework problem: Suppose that β = ν = γ = 1. For what values of N does one have stable spirals,
and for what values does one get stable nodes, for X̄2?
5.2 Interpreting ν

Consider a population of P infected individuals, and suppose that there are no new infections, so that

dI/dt = −νI ,  and hence  I(t) = P e^{−νt} .
Suppose that the ith individual is infected for a total of di days, and consider a table in which the
rows correspond to the individuals 1, 2, ..., P, the columns to the calendar days 0, 1, 2, ..., and an
"X" marks each day on which a given individual is infected. The number of X's in the ith row is
then di, and the number of X's in the column for day k is Ik, the number of infected individuals on
day k.
It is clear that d1 + d2 + . . . = I0 + I1 + I2 + . . .
(supposing that we count on integer days, or hours, or some other discrete time unit).
Therefore, the average number of days that individuals are infected is:

(1/P) Σi di = (1/P) Σk Ik ≈ (1/P) ∫_0^∞ I(t) dt = (1/P) ∫_0^∞ P e^{−νt} dt = 1/ν .
On the other hand, back in the original model, what is the meaning of the term βSI in dI/dt?
It means that I(t) − I(0) ≈ βS(0)I(0)t for small t.
Therefore, if we start with I(0) infectives, and we look at an interval of time of length t = 1/ν,
which we agreed represents the average duration of an infection, we end up with the following number of
new infectives:

β(N − I(0))I(0)/ν ≈ βN I(0)/ν

if I(0) ≪ N, which means that each individual, on average, infected (βN I(0)/ν)/I(0) = βN/ν new
individuals.
We conclude, from this admittedly hand-waving argument¹⁹, that σ = βN/ν represents the expected number
infected by a single individual (in epidemiology, the intrinsic reproductive rate of the disease).
¹⁹
Among other things, we'd need to know that ν is large, so that t = 1/ν is small.
5.3 Nullcline Analysis

Let us take, as a concrete example, β = γ = ν = 1 and N = 2. On the S-nullcline, where
I = (2 − S)/(S + 1), we have

dI/dt = SI − I = (S − 1)(2 − S)/(S + 1) ,

and therefore arrows point down if S < 1, and up
if S ∈ (1, 2). This in turn allows us to know the
general orientation (NE, etc.) of the vector field.
Here are computer-generated phase-planes for this example, as well as for a modification in which
we took ν = 3 (so that σ < 1).
In the first case, the system settles to the positive steady state, no matter where it started,
as long as I(0) > 0.
In the second case, there is only one equilibrium, since the vertical component of the I-nullcline is at
S = ν/β = 3, which does not intersect the other nullcline. The disease will disappear in this case.
5.4 Immunizations
The effect of immunizations is to reduce the threshold N needed for a disease to take hold.
In other words, for N small, the condition σ = βN/ν > 1 will fail, and no positive steady state will
exist.
Vaccinations have the effect of permanently removing a certain proportion p of individuals from the
population, so that, in effect, N is replaced by (1 − p)N. Vaccinating just a fraction p > 1 − 1/σ of
the individuals gives σ(1 − p) < 1, and hence suffices to eradicate the disease!
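The threshold computation is simple enough to script; here is a small Python sketch (function name and parameter values are our own, chosen for illustration):

```python
# Critical vaccination fraction: sigma*(1-p) < 1 requires p > 1 - 1/sigma.
# The beta, nu, N values below are illustrative assumptions.
def critical_fraction(beta, nu, N):
    sigma = beta * N / nu            # intrinsic reproductive rate
    if sigma <= 1.0:
        return 0.0                   # disease dies out even without vaccination
    return 1.0 - 1.0 / sigma

print(critical_fraction(beta=1.0, nu=1.0, N=4.0))   # sigma = 4 -> 0.75
print(critical_fraction(beta=1.0, nu=3.0, N=2.0))   # sigma = 2/3 -> 0.0
```

With σ = 4, vaccinating more than 75% of the population suffices; with σ < 1, no vaccination is needed at all.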
5.5 A Variation: STDs
Suppose that we wish to study a virus that can only be passed on by heterosexual sex. Then we should
consider two separate populations, male and female. We use S to indicate the susceptible males and
S̄ for the susceptible females, and similarly for I and R.
The equations analogous to the SIRS model are:

dS/dt = −βSĪ + γR
dI/dt = βSĪ − νI
dR/dt = νI − γR
dS̄/dt = −β̄S̄I + γ̄R̄
dĪ/dt = β̄S̄I − ν̄Ī
dR̄/dt = ν̄Ī − γ̄R̄ .
This model is a little difficult to study, but in many STDs (especially asymptomatic ones) there is no
removed class; instead, the infecteds go back into the susceptible population. This gives:

dS/dt = −βSĪ + νI
dI/dt = βSĪ − νI
dS̄/dt = −β̄S̄I + ν̄Ī
dĪ/dt = β̄S̄I − ν̄Ī .
Writing N = S(t) + I(t) and N̄ = S̄(t) + Ī(t) for the total numbers of males and females, and using
these two conservation laws, we can just study the following set of two ODEs:

dI/dt = β(N − I)Ī − νI
dĪ/dt = β̄(N̄ − Ī)I − ν̄Ī .
Homework: Prove that there are two equilibria, I = Ī = 0 and, provided that

σ = (βN/ν)(β̄N̄/ν̄) > 1 ,

also

I = (NN̄ − νν̄/(ββ̄)) / (ν/β + N̄) ,  Ī = (NN̄ − νν̄/(ββ̄)) / (ν̄/β̄ + N) .
Furthermore, prove that the first equilibrium is unstable, and the second one stable.
What vaccination strategies could be used to eradicate the disease?
Chemical Kinetics
Elementary reactions (in a gas or liquid) are due to collisions of particles (molecules, atoms).
Particles move at a velocity that depends on temperature (the higher the temperature, the faster they move).
The law of mass action is:
reaction rates (at constant temperature) are proportional to products of concentrations.
This law may be justified intuitively in various ways, for instance, using an argument like the one that
we presented for disease transmission.
In chemistry, collision theory studies this question and justifies mass-action kinetics.
To be precise, it isn't enough for collisions to happen: the collisions have to happen in the right
way and with enough energy for bonds to break.
For example, consider the following simple reaction involving a collision between two molecules:
ethene (CH2=CH2) and hydrogen chloride (HCl), which results in chloroethane.
As a result of the collision between the two molecules, the double bond between the two carbons is
converted into a single bond, a hydrogen atom gets attached to one of the carbons, and a chlorine atom
to the other.
But the reaction can only work if the hydrogen end of the H-Cl bond approaches the carbon-carbon
double bond; any other collision between the two molecules doesn't produce the product, since the
two simply bounce off each other.
The proportionality factor (the rate constant) in the law of mass action accounts for temperature,
probabilities of the right collision happening if the molecules are near each other, etc.
We will derive ordinary differential equations based on mass action kinetics. However, it is important
to remember several points:
If the medium is not well mixed, then mass-action kinetics might not be valid.
If the number of molecules is small, a probabilistic model should be used. Mass-action ODE models
are only valid as averages when dealing with large numbers of particles in a small volume.
If a catalyst is required for a reaction to take place, then doubling the concentration of a reactant
does not mean that the reaction will proceed twice as fast. We later study some catalytic reactions.
6.1 Equations
We will use capital letters A, B, . . . for names of chemical substances (molecules, ions, etc), and
lower-case a, b, . . . for their corresponding concentrations.
There is a systematic way to write down equations for chemical reactions, using a graph description
of the reactions and formulas for the different kinetic terms. We discuss this systematic approach
later, but for now we consider some very simple reactions, for which we can write equations directly.
We simply use the mass-action principle for each separate reaction, and add up all the effects.
The simplest reactions are those with only one reactant, which can degrade²³ or decay (as in
radioactive decay), or be transformed into another species, or split into several constituents.
In each case, the rate of the reaction is proportional to the concentration:
if we have twice the amount of substance X in a certain volume, then, per (small) unit of time, a
certain % of the substance in this volume will disappear, which means that the concentration will
diminish by that fraction.
A corresponding number of the new substances is then produced, per unit of time.
So, a decay reaction X →[k] 0 gives the ODE:

dx/dt = −kx ,

a transformation X →[k] Y gives:

dx/dt = −kx
dy/dt = kx ,

and a dissociation reaction Z →[k] X + Y gives:

dx/dt = kz
dy/dt = kz
dz/dt = −kz .

A bimolecular reaction X + Y →[k+] Z gives:

dx/dt = −k+ xy
dy/dt = −k+ xy
dz/dt = k+ xy

and if the reverse reaction Z →[k-] X + Y also takes place:

dx/dt = −k+ xy + k- z
dy/dt = −k+ xy + k- z
dz/dt = k+ xy − k- z .
²³
Of course, "degrade" is a relative concept, because the separate parts of the decaying substance should be accounted
for. However, if these parts are not active in any further reactions, one ignores them and simply thinks of the
reactant as disappearing!
Note the "±" subscripts being used to distinguish between the forward and backward rate constants.
Incidentally, another way to symbolize the pair of reactions X + Y →[k+] Z and Z →[k-] X + Y is as
follows:

X + Y ⇌ Z ,

with k+ written over the forward arrow and k- under the backward one.
Here is one last example: X + Y →[k] Z and Z →[k'] X give:

dx/dt = −kxy + k'z
dy/dt = −kxy
dz/dt = kxy − k'z .

(More examples are given in the homework problems.)
Conservation laws are often very useful in simplifying the study of chemical reactions.
For example, take the reversible bimolecular reaction that we just saw:

dx/dt = −k+ xy + k- z
dy/dt = −k+ xy + k- z
dz/dt = k+ xy − k- z .

Since, clearly, d(x + z)/dt ≡ 0 and d(y + z)/dt ≡ 0, for every solution there are constants x0
and y0 such that x + z ≡ x0 and y + z ≡ y0. Therefore, once these constants are known, we only
need to study the following scalar first-order ODE:

dz/dt = k+ (x0 − z)(y0 − z) − k- z

in order to understand the time-dependence of solutions. Once z(t) is solved for, we can find x(t)
by the formula x(t) = x0 − z(t), and y(t) by the formula y(t) = y0 − z(t).
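The claim that the scalar equation for z carries the same information as the full three-dimensional system can be checked numerically. The following Python sketch (rate constants and initial data are made up) integrates both descriptions with forward Euler and compares:

```python
# Compare the full 3D system for X + Y <-> Z with the scalar ODE for z,
# using forward Euler; rate constants and initial data are made up.
kp, km = 2.0, 1.0
x, y, z = 1.0, 0.8, 0.0
x0, y0 = x + z, y + z        # conserved quantities
zr = z                       # z in the reduced scalar model

dt = 1e-4
for _ in range(50000):       # integrate to t = 5
    v = kp * x * y - km * z  # net forward reaction rate
    x, y, z = x - dt * v, y - dt * v, z + dt * v
    zr += dt * (kp * (x0 - zr) * (y0 - zr) - km * zr)

print(abs(z - zr))           # tiny: the two descriptions agree
print(abs(x + z - x0))       # the conservation law holds along the solution
```

Since x = x0 − z and y = y0 − z hold exactly along solutions, the two recursions are the same up to roundoff.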
We'll see an example of the use of conservation laws when modeling enzymatic reactions.
6.2 Chemical Networks
We next discuss a formalism that allows one to easily write up differential equations associated with
chemical reactions given by diagrams like

2H + O ⇌ H2O .   (6)

Let us introduce the species:

S1 = H,  S2 = O,  S3 = H2O ,

and interpret a reversible reaction such as (6) as two reactions,
one going forward and one going backward. In general, a chemical reaction network (CRN, for
short) is a set of chemical reactions Ri, i ∈ {1, 2, ..., nr}:
Ri :  Σ_{j=1}^{ns} αij Sj  →  Σ_{j=1}^{ns} βij Sj   (7)

where the αij and βij are nonnegative integers, called the stoichiometry coefficients.
The species with nonzero coefficients on the left-hand side are usually referred to as the reactants, and
the ones on the right-hand side are called the products, of the respective reaction. (Zero coefficients are
not shown in diagrams.) The interpretation is that, in reaction 1, α11 molecules of species S1 combine
with α12 molecules of species S2, etc., to produce β11 molecules of species S1, β12 molecules of
species S2, etc., and similarly for each of the other nr − 1 reactions.
The forward arrow means that the transformation of reactants into products only happens in the direction of the arrow. For example, the reversible reaction (6) is represented by the following CRN, with
nr = 2 reactions:

R1 : 2H + O → H2O
R2 : H2O → 2H + O .

The corresponding stoichiometry coefficients are:

α11 = 2,  α12 = 1,  α13 = 0,  β11 = 0,  β12 = 0,  β13 = 1

and

α21 = 0,  α22 = 0,  α23 = 1,  β21 = 2,  β22 = 1,  β23 = 0 .
It is convenient to arrange the stoichiometry coefficients into an ns × nr matrix, called the stoichiometry matrix Γ = (Γji), defined as follows:

Γji = βij − αij ,  i = 1, ..., nr ,  j = 1, ..., ns   (8)

(notice the reversal of indices).
The matrix Γ has as many columns as there are reactions. Each column shows, for all species (ordered
according to their index j), the net amount produced minus consumed in the corresponding reaction. For
example, for the reaction (6), Γ is the following matrix:

    −2    2
    −1    1
     1   −1 .
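The recipe Γji = βij − αij is easy to mechanize. A minimal Python sketch for reaction (6), with species ordered H, O, H2O:

```python
# Gamma_{ji} = beta_{ij} - alpha_{ij} for the reversible reaction 2H + O <-> H2O.
# Rows of alpha/beta index the reactions; columns index the species H, O, H2O.
alpha = [[2, 1, 0],    # R1: 2H + O -> H2O  (reactant side)
         [0, 0, 1]]    # R2: H2O -> 2H + O  (reactant side)
beta  = [[0, 0, 1],    # R1 product side
         [2, 1, 0]]    # R2 product side

nr, ns = len(alpha), len(alpha[0])
Gamma = [[beta[i][j] - alpha[i][j] for i in range(nr)] for j in range(ns)]
for row in Gamma:
    print(row)
# [-2, 2]
# [-1, 1]
# [1, -1]
```

Note the index reversal: the loops transpose the (reaction, species) coefficient tables into the ns × nr matrix Γ.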
Notice that we allow degradation reactions like A → 0 (all β's are zero for this reaction).
Homework: Find the matrix Γ for each of the reactions shown in Section 6.1 of the notes, as well as
in the homework problems on the course website.
We now describe how the state of the network evolves over time, for a given CRN. We need to find a
rule for the evolution of the vector:
[S1(t)]
[S2(t)]
  ...
[Sns(t)]
where the notation [Si (t)] means the concentration of the species Si at time t. For simplicity, we drop
the brackets and write Si also for the concentration of Si (sometimes, to avoid confusion, we use
instead lower-case letters like si to denote concentrations). As usual with differential equations, we
also drop the argument t if it is clear from the context. Observe that only nonnegative concentrations
make physical sense (a zero concentration means that a species is not present at all).
The graphical information given by reaction diagrams is summarized by the matrix Γ. Another ingredient that we require is a formula for the actual rate at which the individual reactions take place.
We denote by Ri(S) the algebraic form of the ith reaction rate. The most common assumption is that of
mass-action kinetics, where:

Ri(S) = ki Π_{j=1}^{ns} Sj^{αij}  for all i = 1, ..., nr .
This says simply that the reaction rate is proportional to the products of concentrations of the reactants,
with higher exponents when more than one molecule is needed. The coefficients ki are reaction
constants, which usually label the arrows in diagrams. Let us write the vector of reactions as R(S):

R(S) := (R1(S), R2(S), ..., Rnr(S))ᵀ .

With these conventions, the system of differential equations associated to the CRN is given as follows:

dS/dt = Γ R(S) .   (9)
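Equation (9) can be assembled programmatically from the coefficients αij, βij and the rate constants ki. The following Python sketch (the function and variable names are our own) computes the mass-action right-hand side for an arbitrary CRN:

```python
# Mass-action right-hand side dS/dt = Gamma R(S) for a general CRN.
# alpha[i][j], beta[i][j]: stoichiometry of species j in reaction i.
def mass_action_rhs(conc, alpha, beta, k):
    nr, ns = len(alpha), len(conc)
    # R_i(S) = k_i * prod_j S_j^{alpha_ij}
    rates = []
    for i in range(nr):
        r = k[i]
        for j in range(ns):
            r *= conc[j] ** alpha[i][j]
        rates.append(r)
    # dS_j/dt = sum_i Gamma_{ji} R_i = sum_i (beta_ij - alpha_ij) R_i
    return [sum((beta[i][j] - alpha[i][j]) * rates[i] for i in range(nr))
            for j in range(ns)]

# The reversible reaction (6), 2H + O <-> H2O, with made-up rate constants:
alpha = [[2, 1, 0], [0, 0, 1]]
beta  = [[0, 0, 1], [2, 1, 0]]
print(mass_action_rhs([1.0, 1.0, 1.0], alpha, beta, [1.0, 0.5]))
# [-1.0, -0.5, 0.5]
```

Feeding this right-hand side to any ODE integrator then simulates the network.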
Example
As an illustrative example, let us consider the following set of chemical reactions:
E + P ⇌ C → E + Q ,   F + Q ⇌ D → F + P ,   (10)

where the reversible steps have forward/backward rate constants k1, k-1 (for E + P ⇌ C) and
k3, k-3 (for F + Q ⇌ D), and the irreversible steps have constants k2 and k4 respectively.
Ordering the species as P, Q, E, F, C, D, and the reactions as

E + P → C ,  C → E + P ,  C → E + Q ,  F + Q → D ,  D → F + Q ,  D → F + P ,

we have:

S = (P, Q, E, F, C, D)ᵀ ,
R(S) = (k1 EP, k-1 C, k2 C, k3 FQ, k-3 D, k4 D)ᵀ ,

and

     −1    1    0    0    0    1     (P)
      0    0    1   −1    1    0     (Q)
Γ =  −1    1    1    0    0    0     (E)
      0    0    0   −1    1    1     (F)
      1   −1   −1    0    0    0     (C)
      0    0    0    1   −1   −1     (D)
From here, we can write the equations (9). For example,

dP/dt = (−1)(k1 EP) + (1)(k-1 C) + (1)(k4 D) = k4 D − k1 EP + k-1 C .
Conservation Laws
Let us consider the set of row vectors c such that cΓ = 0. Any such vector is a conservation law,
because

d(cS)/dt = c dS/dt = c Γ R(S) = 0

for all t; in other words,

c S(t) = constant

along all solutions (a first integral of the motion). The set of such vectors forms a linear subspace
(of the vector space consisting of all row vectors of size ns).
For instance, in the previous example, we have that, along all solutions,

P(t) + Q(t) + C(t) + D(t) ≡ constant

because (1, 1, 0, 0, 1, 1)Γ = 0. Similarly, we have two more linearly independent conservation laws,
namely (0, 0, 1, 0, 1, 0) and (0, 0, 0, 1, 0, 1), so also

E(t) + C(t)  and  F(t) + D(t)

are constant along trajectories. Since Γ has rank 3 (easy to check) and has 6 rows, its left-nullspace
has dimension three. Thus, a basis of the set of conservation laws is given by the three that we have
found.
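One can verify these conservation laws mechanically by checking cΓ = 0. A Python sketch using the Γ of the example:

```python
# Check c * Gamma = 0 for the three conservation laws of the example (10).
# Species order: P, Q, E, F, C, D; columns: the six elementary reactions.
Gamma = [[-1,  1,  0,  0,  0,  1],   # P
         [ 0,  0,  1, -1,  1,  0],   # Q
         [-1,  1,  1,  0,  0,  0],   # E
         [ 0,  0,  0, -1,  1,  1],   # F
         [ 1, -1, -1,  0,  0,  0],   # C
         [ 0,  0,  0,  1, -1, -1]]   # D

laws = {"P+Q+C+D": (1, 1, 0, 0, 1, 1),
        "E+C":     (0, 0, 1, 0, 1, 0),
        "F+D":     (0, 0, 0, 1, 0, 1)}

for name, c in laws.items():
    cG = [sum(c[j] * Gamma[j][i] for j in range(6)) for i in range(6)]
    print(name, cG)   # each product is the zero row vector
```

For larger networks, one would instead compute a basis of the left-nullspace of Γ numerically (e.g. via Gaussian elimination or an SVD).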
Homework. Find, for each of the problems in the notes and web-posted homework assignment, a
basis of conservation laws.
Optional homework. This one is a bit more complicated, but also very interesting. The example
covered before can be summarized as in Figure 1(a). Many cell signaling processes involve double
instead of single transformations, such as the addition of two phosphate groups. A model for a double
phosphorylation as in Figure 1(b) corresponds to reactions as follows (we use double arrows for
reversible reactions):

(11)

where ES0 represents the complex consisting of E bound to S0, and so forth. You should attach constants to all arrows and write up the system of ODEs. Show also that there is a basis of conservation
laws consisting of three vectors.
6.3
Catalysts facilitate reactions, converting substrates into products, while remaining basically unchanged.
Catalysts may act as pliers that place an appropriate stress to help break a bond,
they may bring substrates together, or they may help place a chemical group on a substrate.
(Figure from Essential Cell Biology, Second Edition, © by Alberts et al., published by Garland Science in 2004.)
Once activated, protein Y may then influence other cellular components, including other proteins,
acting itself as a kinase.
Normally, proteins do not stay activated forever; another type of enzyme, called a phosphatase, eventually takes away the phosphate group.
In this manner, signaling is turned off after a while, so that the system is ready to detect new signals.
Chemical and electrical signals from the outside of the cell are sensed by receptors.
Receptors are proteins that act as the cells sensors of outside conditions, relaying information to the
inside of the cell.
In some ways, receptors may be viewed as enzymes: the substrate is an extracellular ligand (a
molecule, usually small, outside the cell, for instance a hormone or a growth factor), and the product might be, for example, a small molecule (a second messenger) that is released in response to
the binding of ligand to the receptor. (Or, we may view a new conformation of the receptor as the
product of the reaction.)
This release, in turn, may trigger signaling through a series of chemical reactions inside the cell.
Cascades and feedbacks involving enzymatic (and other) reactions, as well as the action of proteins
on DNA (directing the transcription of genes), are, in essence, what "life" is made of.
Below we show one signaling pathway, extracted from a recent paper by Hanahan and Weinberg
on cancer research. It describes the top-level schematics of the wiring diagram of the circuitry (in
mammalian cells) responsible for growth, differentiation, and apoptosis (commands which instruct
the cell to die). Highlighted in red are some of the genes known to be functionally altered in cancer
cells. Almost all of the main species shown are proteins, many of them acting as enzymes in catalyzing
downstream reactions.
6.4 Differential Equations

The basic reactions for an enzyme E converting a substrate S into a product P through an intermediate
complex C are:

S + E ⇌ C → P + E

(with forward and backward rate constants k1 and k-1 for the binding step, and rate constant k2 for
the irreversible product-formation step),
and therefore the equations that relate the concentrations of substrate, (free) enzyme, complex (enzyme with substrate together), and product are:
ds/dt = k-1 c − k1 se
de/dt = (k-1 + k2)c − k1 se
dc/dt = k1 se − (k-1 + k2)c
dp/dt = k2 c .

Since de/dt + dc/dt ≡ 0, the total amount of enzyme, free plus bound, is constant: e(t) + c(t) ≡ e0.
(Often c(0) = 0 (no complex present initially), so that e0 = e(0), the initial concentration of free enzyme.)
So, we can eliminate e from the equations:
ds/dt = k-1 c − k1 s(e0 − c)
dc/dt = k1 s(e0 − c) − (k-1 + k2)c .
We are down to two dimensions, and could proceed using the methods that we have been discussing.
However, Leonor Michaelis and Maud Leonora Menten formulated in 1913 an approach that allows
one to reduce the problem even further, by doing an approximation. Next, we review this approach,
as reformulated by Briggs and Haldane in 1925²⁴, and interpret it in the more modern language of
singular perturbation theory.
Although a two-dimensional system is not hard to study, the reduction to one dimension is very useful:
When connecting many enzymatic reactions, one can make a similar reduction for each one of
the reactions, which provides a great overall reduction in complexity.
It is often not possible, or very hard, to measure the kinetic constants (k1, etc.), but it may be
easier to measure the parameters in the reduced model.
6.5

Let us write

ds/dt = k-1 c − k1 s(e0 − c)
dc/dt = k1 s(e0 − c) − (k-1 + k2)c = k1 [s e0 − (Km + s)c] ,

where Km = (k-1 + k2)/k1.
The MM approximation amounts to setting dc/dt = 0. The biochemical justification is that, after a
transient period during which the free enzymes "fill up", the amount complexed stays more or less
constant.
This allows us, by solving the algebraic equation

s e0 − (Km + s)c = 0

to express c in terms of s:

c = s e0 / (Km + s) .   (12)

Substituting this expression into the equations for p and s gives:

dp/dt = Vmax s / (Km + s)   (13)
ds/dt = −Vmax s / (Km + s) ,   (14)
where we denote Vmax = k2 e0 . If we prefer to explicitly show the role of the enzyme as an input,
we can write these two equations as follows:
ds/dt = −e0 k2 s / (Km + s)
dp/dt = e0 k2 s / (Km + s) ,
²⁴
Michaelis and Menten originally made the equilibrium approximation k-1 c(t) − k1 s(t)e(t) = 0, in which one
assumes that the first reaction is in equilibrium. This approximation is very hard to justify. The Briggs and Haldane
approach makes a different approximation. The final form of the production rate (see later) turns out to be algebraically
the same as in the original Michaelis and Menten work, but the parameters have different physical interpretations in terms
of the elementary reactions.
showing the rate at which substrate gets transformed into product with the help of the enzyme.
This is all very nice, and works out well in practice, but the mathematical justification is flaky: setting
dc/dt = 0 means that c is constant. But then, the equation c = s e0/(Km + s) implies that s must be constant,
too. Therefore, also ds/dt = 0.
But then Vmax s/(Km + s) = −ds/dt = 0, which means that s = 0. In other words, our derivation can only be
right if there is no substrate, so no reaction is taking place at all!
One way to justify these derivations is as follows. Under appropriate conditions, s changes much
more slowly than c.
So, as far as c is concerned, we may assume that s(t) is constant, let us say s(t) = s̄.
Then, the equation for c becomes a linear equation, which converges to its steady state, given
by formula (12) (with s = s̄), obtained by setting dc/dt = 0.
Now, as s changes, c catches up very fast, so that this formula is always (approximately) valid.
From the point of view of s, the variable c is always catching up with its expression given by
formula (12), so, as far as its slow movement is concerned, s evolves according to formula (14).
(An exception is at the start of the whole process, when c(0) is initially far from its steady state value.
This is the boundary layer behavior.)
To make this more precise, we need to do a time-scale analysis, which studies the dynamics from s's
point of view (the slow time scale) and from c's point of view (the fast time scale) separately.
We now do all this analysis carefully.
6.6

Let us introduce the new variables

x = s/s0 ,  y = c/e0 ,

and write also ε = e0/s0, where we think of s0 as the initial concentration s(0) of substrate.
Note that x, y, and ε are non-dimensional.
Using the new variables, the equations become:

dx/dt = ε [k-1 y − k1 s0 x (1 − y)]
dy/dt = k1 [s0 x − (Km + s0 x) y] .
Next, suppose that the initial concentration of enzyme is small compared to that of substrate.
This means that the ratio ε is small²⁵.
Since ε ≈ 0, we make the approximation ε = 0 and substitute it into these equations. Then
dx/dt = 0, which means that x(t) equals a constant x̄, and hence the second equation becomes:

dc/dt = k1 [e0 s̄ − (Km + s̄)c]
²⁵
It would not make sense to just say that the amount of enzyme is small, since the meaning of "small" depends on
units. On the other hand, the ratio ε makes sense, assuming of course that we quantify the concentrations of enzyme and
substrate in the same units. Typical values of ε may be in the range 10^−2 to 10^−7.
(substituting s0 x = s and e0 y = c to express things in terms of the original variables, and letting s̄ = s0 x̄).
In this differential equation, c(t) converges as t → ∞ to the steady state

c̄ = e0 s̄ / (Km + s̄) ,

which is also obtained by setting dc/dt = 0 in the original equations, if s(t) ≡ s̄ is assumed constant.
In this way, we again obtain formula (13) for dp/dt (s̄ is the present value of s).
This procedure is called a quasi-steady-state approximation (QSS), reflecting the fact that one replaces c by its steady-state value e0 s/(Km + s), obtained by pretending that s would be constant. This is not
a true steady state of the original equations, of course.
The assumptions that went into our approximation were that ε ≪ 1 and, implicitly, that the time
interval that we considered wasn't too long (because, otherwise, dx/dt does change, even if ε ≪ 1).
One may argue that saying that the time interval is short is not consistent with the assumption that
c(t) has converged to steady state.
However, the constants appearing in the c equation are not small compared to ε: the speed of
convergence is determined by k1(Km + s̄), which does not get small as ε → 0.
So, for small enough ε, the argument makes sense (on any fixed time interval). In other words, the
approximation is justified provided that the initial amount of enzyme is much smaller than the amount
of substrate.
(By comparison, notice that we did not have a way to know when our first derivation (merely setting
dc/dt = 0) was reasonable.)
One special case is that of small times t, in which case we may assume that s ≈ s0, and therefore the
equation for c is approximated by:

dc/dt = k1 [e0 s0 − (Km + s0)c] .   (15)
One calls this the boundary layer equation, because it describes what happens near initial times
(boundary of the time interval).
Homework problem: Suppose that, instead of e0 ≪ s0, we know only the weaker condition

e0 ≪ s0 + Km .

Show that the same formula for product formation is obtained. Specifically, now pick:

x = s/(s0 + Km) ,  y = c/e0 ,  ε = e0/(s0 + Km) .
We next redo the analysis using a new time variable:

τ = (e0/s0) k1 t = ε k1 t .

We may think of t as a fast time scale, because τ = ε k1 t, and therefore τ is small for any given t.
For example, if ε k1 = 1/3600 and t is measured in seconds, then τ = 10 corresponds to t = 36000; thus,
τ = 10 means that ten hours have elapsed, while t = 10 means that only ten seconds have elapsed.
Substituting s = s0 x, c = e0 y, and

dx/dτ = (1/(e0 k1)) ds/dt ,  dy/dτ = (s0/(e0^2 k1)) dc/dt ,

we have:

dx/dτ = (k-1/k1) y − s0 x (1 − y)
ε dy/dτ = s0 x − (Km + s0 x) y .
Still assuming that ε ≪ 1, we make an approximation by setting ε = 0 in the second equation:

0 = s0 x − (Km + s0 x) y ,

leading to the algebraic equation s0 x − (Km + s0 x)y = 0, which we solve for

y = y(x) = s0 x / (Km + s0 x) ,   (16)

or, equivalently,

c = e0 s / (Km + s) ,

and finally we substitute into the first equation:

dx/dτ = (k-1/k1) y − s0 x (1 − y) = (k-1 − Km k1) s0 x / (k1 (Km + s0 x)) = −k2 s0 x / (k1 (Km + s0 x))

(recall that Km = (k-1 + k2)/k1).
In terms of the original variable s = s0 x, using ds/dt = e0 k1 dx/dτ and recalling that Vmax = k2 e0,
we have re-derived (14):

ds/dt = −Vmax s / (Km + s) .
The important point to realize is that, after an initial transient during which c (or y) converges to its
quasi-steady state, c quickly catches up with any (slow!) changes in s; this catch-up is not visible at
the time scale τ, so c appears to track the expression (16).
6.7

We can compare, numerically, the solutions of the full system for (s, c) with those of the reduced equations.
Since it is difficult to see the curves for small t, we show plots both for t ∈ [0, 25] and for t ∈ [0, 0.5]:
As expected, the blue curve approximates well for small t and the red one for larger t.
(The plots were generated with a short Maple script, run with Tmax = 0.5 and Tmax = 25.)
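The original Maple code is not reproduced here; a rough Python substitute, which integrates the full (s, c) system alongside the reduced equation (14) and compares the two, might look as follows (all parameter values are assumptions, not the ones used for the plots in the notes):

```python
# Integrate the full (s, c) enzyme system and the reduced equation (14),
# then compare. Parameter values are assumptions, not those of the notes.
k1, km1, k2 = 30.0, 1.0, 10.0        # km1 stands for k_{-1}
e0, s0 = 0.05, 1.0                   # epsilon = e0/s0 = 0.05 is small
Km = (km1 + k2) / k1
Vmax = k2 * e0

s, c = s0, 0.0                       # full system
sr = s0                              # reduced (quasi-steady-state) system
dt = 1e-5
for _ in range(100000):              # integrate to t = 1
    ds = km1 * c - k1 * s * (e0 - c)
    dc = k1 * s * (e0 - c) - (km1 + k2) * c
    s, c = s + dt * ds, c + dt * dc
    sr += dt * (-Vmax * sr / (Km + sr))

print(round(s, 3), round(sr, 3))     # the two values should be close
```

Collecting (t, s, c, sr) along the way and plotting would reproduce the kind of figure described above; the discrepancy between s and sr is of order ε.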
6.8
The advantage of deriving things in this careful fashion is that we have a better understanding of what
went into the approximations. Even more importantly, there are methods in mathematics that help to
quantify the errors made in the approximation. The area of mathematics that deals with this type of
argument is singular perturbation theory.
The theory applies, in general, to equations like this:

dx/dt = f(x, y)
ε dy/dt = g(x, y)

with 0 < ε ≪ 1. The components of the vector x are called slow variables, and those of y fast
variables.
variables.
The terminology is easy to understand: dy/dt = (1/ε) g(x, y) means that dy/dt is large, i.e., that y(t) is
fast, and by comparison x(t) is slow.²⁶
The singular perturbation approach starts by setting ε = 0,
then solving (if possible) g(x, y) = 0 for y = h(x) (that is, g(x, h(x)) ≡ 0),
and then substituting y = h(x) back into the first equation.
Thus, one studies the reduced system:

dx/dt = f(x, h(x))

on the slow manifold defined by g(x, y) = 0.
There is a rich theory that allows one to mathematically justify the approximations.
A particularly useful point of view is that of geometric singular perturbation theory. We will not
cover any of that in this course, though.
²⁶
The theory also covers multiple (not just two) time scales, as well as partial differential equations where the domain is
subject to small deformations, and many other situations.
6.9 Inhibition
If the primary substrate cannot bind, no product (such as the release of signaling molecules by a
receptor) can be created.
For example, the enzyme may be a cell surface receptor, and the primary substrate might be a growth factor, hormone, or histamine (a compound released by the immune system in response to pollen, dust, etc.).
Competitive inhibition is one mechanism by which drugs act. For example, an inhibitor drug will attempt to block the binding of the substrate to receptors in cells that can react to that substrate, such as histamines to lung cells. Many antihistamines work in this fashion, e.g., Allegra.27
A simple chemical model is as follows:

S + E ⇌ C1 → P + E      (forward rate k1, reverse rate k_{-1}; product formation rate k2)
I + E ⇌ C2              (forward rate k3, reverse rate k_{-3})
where C1 is the substrate/enzyme complex, C2 the inhibitor/enzyme complex, and I the inhibitor.
In terms of ODEs, we have:
ds/dt  = k_{-1} c1 - k1 s e
de/dt  = (k_{-1} + k2) c1 + k_{-3} c2 - k1 s e - k3 i e
dc1/dt = k1 s e - (k_{-1} + k2) c1
dc2/dt = k3 i e - k_{-3} c2
di/dt  = k_{-3} c2 - k3 i e
dp/dt  = k2 c1 .
It is easy to see that c1 + c2 + e is constant (it represents the total amount of free or bound enzyme, which we'll denote as e0), and similarly i + c2 = i0 is constant (total amount of inhibitor, free or bound to enzyme). This allows us to eliminate e and i from the equations. Furthermore, as before, we
27. In pharmacology, an agonist is a ligand which, when bound to a receptor, triggers a cellular response. An antagonist is a competitive inhibitor of an agonist, when we view the receptor as an enzyme and the agonist as a substrate.
may first ignore the equation for p. We are left with a set of three ODEs:
ds/dt  = k_{-1} c1 - k1 s (e0 - c1 - c2)
dc1/dt = k1 s (e0 - c1 - c2) - (k_{-1} + k2) c1
dc2/dt = k3 (i0 - c2)(e0 - c1 - c2) - k_{-3} c2
One may now do a quasi-steady-state approximation, assuming that the enzyme concentrations are small relative to substrate. We omit the steps; essentially, we need to nondimensionalize as earlier, set an appropriate small parameter to zero, etc.
Formally, we can just set dc1/dt = 0 and dc2/dt = 0. Doing so gives:

c1 = Ki e0 s / (Km i + Ki s + Km Ki) ,      where Km = (k_{-1} + k2)/k1 ,
c2 = Km e0 i / (Km i + Ki s + Km Ki) ,      where Ki = k_{-3}/k3

(not eliminating i).
The product formation rate is dp/dt = k2 c1, so, again with Vmax = k2 e0, one has the approximate formula:

dp/dt = Vmax s / (s + Km (1 + i/Ki)) .
The formula reduces to the previous one when there is no inhibition (i = 0).
We see that the rate of product formation is smaller than if there had been no inhibition, given the same amount of substrate s(t) (at least if i > 0 and k3 > 0, so that some inhibitor actually binds).
But for s very large, the rate saturates at dp/dt = Vmax, just as if there were no inhibitor (intuitively, there is so much s that i doesn't get a chance to bind and block).
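The approximate rate law is easy to explore numerically; a minimal sketch in Python, with arbitrary illustrative parameter values:

```python
# Competitive inhibition rate law from the text:
#   dp/dt = Vmax s / (s + Km (1 + i/Ki))
# (the parameter values below are arbitrary, for illustration only)
Vmax, Km, Ki = 1.0, 1.0, 0.5

def rate(s, i):
    return Vmax * s / (s + Km * (1.0 + i / Ki))

print(rate(1.0, 0.0))    # no inhibitor: Vmax s/(s + Km) = 0.5
print(rate(1.0, 1.0))    # inhibitor present: slower rate at the same s
print(rate(1e6, 1.0))    # very large s: the rate still approaches Vmax
```

The last line illustrates the saturation property: however much inhibitor is present, enough substrate eventually out-competes it.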
6.10 Allosteric Inhibition
In allosteric inhibition28, an inhibitor does not bind in the same place where the catalytic activity occurs, but instead binds at a different "effector" site (other names are "regulatory" or "allosteric" site), with the result that the shape of the enzyme is modified. In the new shape, it is harder for the enzyme to bind to the substrate.
28. Merriam-Webster: allosteric: allo- + steric; "steric" means "relating to or involving the arrangement of atoms in space" and originates with the Greek word for "solid."
A slightly different situation is if binding of substrate can always occur, but product can only be
formed (and released) if I is not bound. We model this last situation, which is a little simpler.
Also, for simplicity, we will assume that the bindings of S and I to E are independent of each other. (If we don't assume this, the equations are still the same, but we need to introduce some more kinetic constants k's.)
A reasonable chemical model is, then:
E + S  ⇌ ES  → P + E      (rates k1, k_{-1}; k2)
EI + S ⇌ EIS              (rates k1, k_{-1})
E + I  ⇌ EI               (rates k3, k_{-3})
ES + I ⇌ EIS              (rates k3, k_{-3})
where EI denotes the complex of enzyme and inhibitor, etc.
It is possible to prove (see e.g. Keener & Sneyd's Mathematical Physiology, exercise 1.5) that, under a quasi-steady state approximation, there results a rate

dp/dt = ( Vmax / (1 + i/Ki) ) · (s^2 + a s + b) / (s^2 + c s + d)

for some suitable numbers a = a(i), . . . , and a suitably defined Ki.
Notice that the maximal possible rate, for large s, is lower than in the case of competitive inhibition. One intuition is that, no matter what the amount of substrate is, the inhibitor can still bind, so maximal throughput is affected.
6.11 Cooperativity
Let's take a situation where n molecules of substrate must first get together with the enzyme in order for the reaction to take place:
nS + E ⇌ C → P + E      (forward rate k1, reverse rate k_{-1}; product formation rate k2)
This is not a very realistic model, since it is unlikely that n + 1 molecules may meet simultaneously.
It is, nonetheless, a simplification of a more realistic model in which the bindings may occur in
sequence.
One says that the cooperativity degree of the reaction is n, because n molecules of S must be present
for the reaction to take place.
Highly cooperative reactions are extremely common in biology, for instance, in ligand binding to cell
surface receptors, or in binding of transcription factors to DNA to control gene expression.
We only look at this simple model in this course. We have these equations:
ds/dt = n k_{-1} c - n k1 s^n e
de/dt = (k_{-1} + k2) c - k1 s^n e
dc/dt = k1 s^n e - (k_{-1} + k2) c
dp/dt = k2 c
Doing a quasi-steady state approximation, under the assumption that enzyme concentration is small
compared to substrate, we may repeat the previous steps (do it as a homework problem!), which lead
to the same expression as earlier for product formation, except for a different exponent:
dp/dt = Vmax s^n / (Km + s^n) .
The integer n is called the Hill coefficient.
One may determine Vmax, n, and Km experimentally, from knowledge of the rate of product formation ṗ = dp/dt as a function of current substrate concentration (under the quasi-steady state approximation assumption).

First, Vmax may be estimated from the rate ṗ corresponding to s → ∞. This allows the computation of the quantity ṗ/(Vmax - ṗ). Then, one observes that the following equality holds (solve for s^n and take logs):

n ln s = ln Km + ln( ṗ / (Vmax - ṗ) ) .

Thus, by a linear regression of ln( ṗ/(Vmax - ṗ) ) versus ln s, and looking at the slope and intercept, n and Km can be estimated.
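This fitting procedure can be sketched in Python. The synthetic data below is generated from the rate law dp/dt = Vmax s^n/(Km + s^n) with made-up values n = 3, Km = 2, Vmax = 1, which the regression then recovers:

```python
import math

# Synthetic, noiseless rate data from dp/dt = Vmax s^n/(Km + s^n)
# (n = 3, Km = 2, Vmax = 1 are made-up values to be recovered)
Vmax, Km, n = 1.0, 2.0, 3
svals = [0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]
pdot = [Vmax * s**n / (Km + s**n) for s in svals]

# Linearization from the text: ln(pdot/(Vmax - pdot)) = n ln s - ln Km,
# so we regress y against x and read off slope n and intercept -ln Km.
xs = [math.log(s) for s in svals]
ys = [math.log(p / (Vmax - p)) for p in pdot]
N = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(u * u for u in xs)
sxy = sum(u * v for u, v in zip(xs, ys))
n_est = (N * sxy - sx * sy) / (N * sxx - sx * sx)
Km_est = math.exp(-(sy - n_est * sx) / N)
print(f"estimated n = {n_est:.3f}, Km = {Km_est:.3f}")
```

With noiseless data the estimates are exact; with experimental data the same regression gives a least-squares fit of n and Km.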
Since the cooperative mechanism may include many unknown and complicated reactions, including very complicated allosteric effects, it is not uncommon for fractional powers to appear (even if the above model makes no sense in a fractional situation) when fitting parameters.
It is often convenient to write the rate in the slightly different form

dp/dt = Vmax s^n / (Km^n + s^n) .

This has the advantage that, just as earlier, Km has an interpretation as the value of substrate s for which the rate of formation of product is half of Vmax.
For our subsequent studies, the main fact that we observe is that, for n > 1, one obtains a sigmoidal shape for the formation rate, instead of a hyperbolic shape. This is because, if f(s) = Vmax s^n/(Km^n + s^n), then f′(0) = 0 when n > 1 (while f′(0) > 0 when n = 1). In other words, for n > 1, and as the function is clearly increasing, the graph must start with concavity-up. But, since the function is bounded, the concavity must change to negative at some point.
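The concavity claim can be checked with finite differences (Vmax = Km = 1 are arbitrary normalizations):

```python
# f(s) = Vmax s^n/(Km^n + s^n): concave down everywhere for n = 1,
# but concave up near s = 0 (then down) for n > 1.
Vmax, Km = 1.0, 1.0

def f(s, n):
    return Vmax * s**n / (Km**n + s**n)

def fpp(s, n, h=1e-4):
    # centered second difference approximating f''(s)
    return (f(s + h, n) - 2 * f(s, n) + f(s - h, n)) / h**2

print(fpp(0.2, 1))   # negative: hyperbolic case, concave down
print(fpp(0.2, 3))   # positive: sigmoidal case starts concave up ...
print(fpp(3.0, 3))   # ... and turns concave down past the inflection
```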
Here are graphs of two formation rates, one with n = 1 (hyperbolic) and one with n = 3 (sigmoidal):
Cooperativity plays a central role in allowing for multi-stable systems, memory, and development, as
well see soon.
Here is a more or less random example from the literature29 which shows fits of Vmax and n (nH, for "Hill") to various data sets corresponding to an allosteric reaction. (Since you asked: the paper concerns an intracellular reaction involved in the incorporation of inorganic sulfate into organic molecules by sulfate-assimilating organisms; the allosteric effector is PAPS, 3′-phosphoadenosine-5′-phosphosulfate.)
29. Ian J. MacRae et al., "Induction of positive cooperativity by amino acid replacements within the C-terminal domain of Penicillium chrysogenum ATP sulfurylase," J. Biol. Chem. 275 (2000), pp. 36303-36310.
7 Multi-Stability

7.1 Hyperbolic and Sigmoidal Responses
Let us now look at the enzyme model again, but this time assuming that the substrate is not being
depleted.
This is not as strange a notion as it may seem.
For example, in receptor models, the substrate is ligand, and the product is a different chemical
(such as a second messenger released inside the cell when binding occurs), so the substrate is not
really consumed.
Or, substrate may be replenished and kept at a certain level by another mechanism.
Or, the change in substrate may be so slow that we may assume that its concentration remains constant.
In this case, instead of writing

S + E ⇌ C → P + E      (rates k1, k_{-1}; k2) ,

it makes more sense to write

E ⇌ C → P + E          (rates k1 s, k_{-1}; k2) .
The equations are as before:
de/dt = (k_{-1} + k2) c - k1 s e
dc/dt = k1 s e - (k_{-1} + k2) c
dp/dt = k2 c
except for the fact that we view s as a constant.
Repeating exactly all the previous steps, a quasi-steady state approximation leads us to the product
formation rate:
dp/dt = Vmax s^n / (Km^n + s^n)
with Hill coefficient n = 1, or n > 1 if the reaction is cooperative.
Next, let us make things more interesting by adding a degradation term -λp. In other words, we suppose that product is being produced, but it is also being used up or degraded, at some linear rate λp, where λ is some positive constant. We obtain the following equation for p(t):

dp/dt = Vmax s^n / (Km^n + s^n) - λ p .
As far as p is concerned, this looks like an equation dp/dt = C - λp, in which the constant C = Vmax s^n/(Km^n + s^n) depends on the parameter s.
Let us take λ = 1 just to make notations easier.30 Then the steady state obtained for p is:

p(∞) = Vmax s^n / (Km^n + s^n) .
By analogy, if s were the displacement of a slider or dial, a light dimmer would behave in this way: the steady state as a function of the input concentration s (which we are assuming is some constant) is graded, in the sense that it is proportional to the parameter s (over a large range of values s; eventually, it saturates).
The case n = 1 gives what is called a hyperbolic response, in contrast to sigmoidal response that
arises from cooperativity (n > 1).
As n gets larger, the plot of Vmax s^n/(Km^n + s^n) becomes steeper around s ≈ Km, approaching a step function. The sharp increase, and the saturation, mean that a value of s which is under some threshold (roughly, s < Km) will not result in an appreciable response (p ≈ 0, in steady state), while a value that is over this threshold will give an abrupt change in result (p ≈ Vmax, in steady state).
A binary response is thus produced from cooperative reactions.
The behavior is closer to that of a doorbell: if we don't press hard enough, nothing happens; if we press with the right amount of force (or more), the bell rings.
Ultrasensitivity
Sigmoidal responses are characteristic of many signaling cascades, which display what biologists call
an ultrasensitive response to inputs. If the purpose of a signaling pathway is to decide whether a gene should be transcribed or not, depending on some external signal sensed by a cell, for instance the concentration of a ligand as compared to some default value, such a binary response is required.
Cascades of enzymatic reactions can be made to display an ultrasensitive response, as long as at each step there is a Hill coefficient n > 1, since the derivative of a composition of functions f1 ∘ f2 ∘ · · · ∘ fk is, by the chain rule, a product of derivatives of the functions making up the composition. Thus, the slopes get multiplied, and a steeper nonlinearity is produced. In this manner, a high effective cooperativity index may in reality represent the result of composing several reactions, perhaps taking place at a faster time scale, each of which has only a mildly nonlinear behavior.
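A small numerical illustration of this steepening (the stage function g and its gain 2.5 are made-up choices, picked so that g has slope greater than 1 at its threshold u = 0.5):

```python
# Each cascade stage is the mildly sigmoidal g(u) = 2.5 u^2/(1 + u^2), whose
# unstable fixed point (threshold) is at u = 0.5.  Composing stages makes the
# separation between inputs just below and just above threshold grow.
def g(u):
    return 2.5 * u * u / (1.0 + u * u)

def cascade(s, stages):
    u = s
    for _ in range(stages):
        u = g(u)
    return u

ratios = []
for stages in (1, 2, 3):
    lo, hi = cascade(0.4, stages), cascade(0.6, stages)
    ratios.append(hi / lo)
    print(f"{stages} stage(s): output ratio across the threshold = {hi/lo:.2f}")
```

The ratio grows with the number of stages, i.e., the composed response is steeper, exactly as the chain-rule argument predicts.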
7.2

Suppose now that the product enhances its own formation (a positive feedback), so that in place of the constant s the formation rate depends on p itself:

dp/dt = Vmax p^n / (Km^n + p^n) - λ p .

If n = 1, so that the formation rate is hyperbolic, observe that, for small p, the formation rate is larger than the degradation rate, while, for large p, the degradation rate exceeds the formation rate. Thus, the concentration p(t) converges to a unique intermediate value.
31. If we wanted to give a careful mathematical argument, we'd need to do a time-scale separation argument in detail. We will proceed very informally.
32. Actually, we can always rescale p and t and rename parameters so that we have this simpler situation, anyway.
If instead the formation rate is sigmoidal (n > 1), the two graphs can intersect three times: for small p the degradation rate is larger than the formation rate, so the concentration p(t) converges to a low value, but for large p the formation rate is larger than the degradation rate, and so the concentration p(t) converges to a high value instead.
In summary, two stable states are created, one low and one high, by this interaction of formation
and degradation, if one of the two terms is sigmoidal.
(There is also an intermediate, unstable state.)
Instead of graphing the formation rate and degradation rate separately, one may (and we will, from now on) graph the right-hand side

Vmax p^n / (Km^n + p^n) - λ p

as a function of p. From this, the phase line can be read off, as done in your ODE course.
For example, here is the graph of Vmax p^n/(Km^n + p^n) - p:
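The steady states can also be located numerically from this right-hand side. In the sketch below, the values Vmax = 4, Km = 1, λ = 1, n = 2 are chosen purely for illustration, so that the nonzero steady states have the closed form 2 ± √3:

```python
# Steady states of dp/dt = Vmax p^2/(Km^2 + p^2) - lam*p, found by scanning
# for sign changes of the right-hand side and refining by bisection.
Vmax, Km, lam = 4.0, 1.0, 1.0

def rhs(p):
    return Vmax * p * p / (Km * Km + p * p) - lam * p

def bisect(a, b):
    for _ in range(100):
        m = 0.5 * (a + b)
        if rhs(a) * rhs(m) <= 0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

roots = [0.0]        # p = 0 is always a steady state
p, step = 0.0, 0.01
while p < 5.0:
    if rhs(p) * rhs(p + step) < 0:
        roots.append(bisect(p, p + step))
    p += step
print([round(r, 4) for r in roots])   # three steady states: bistability
```

Reading the phase line: the outer roots (0 and 2 + √3) are stable, and the middle one (2 − √3) is the unstable intermediate state.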
7.3
In unicellular organisms, cell division results in cells that are identical to each other and to the original
(mother) cell. In multicellular organisms, in contrast, cells differentiate.
Since all cells in the same organism are genetically identical, the differences among cells must result
from variations of gene expression.
A central question in developmental biology is: how are these variations established and maintained?
A possible mechanism by which spatial patterns of cell differentiation could be specified during embryonic development and regeneration is based on positional information.33 Cells acquire a positional value with respect to boundaries, and then use this coordinate information during gene expression, to determine their fate and phenotype.
(Daughter cells inherit as initial conditions the gene expression pattern of the mother cells, so that
a developmental history is maintained.)
In other words, the basic premise is that position in the embryo determines cell fate.
But how could this position be estimated by each individual cell?
One explanation is that there are chemicals, called morphogens, which are nonuniformly distributed.
Typically, morphogens are RNA or proteins.
They instruct cells to express certain genes, depending on position-dependent concentrations (and
slopes of concentrations, i.e. gradients).
When different cells express different genes, the cells develop into distinct parts of the organism.
An important concept is that of polarity: opposite ends of a whole organism or of a given tissue
(or sometimes, of a single cell) are different, and this difference is due to morphogen concentration
differences.
Polarity is initially determined in the embryo.
It may be established initially by the site of sperm penetration, as well as environmental factors such
as gravity or pH.
The existence of morphogens and their role in development were for a long time just an elegant
mathematical theory, but recent work in developmental biology has succeeded in demonstrating that
embryos do in fact use morphogen gradients. This has been shown for many different species, although most of the work has been done on fruit flies. A nice expository article (focusing on frogs) is: Jeremy
Green, Morphogen gradients, positional information, and Xenopus: Interplay of theory and experiment, Developmental Dynamics, 2002, 225: 392-408. There is a link for this paper in the course
webpage:
http://www.math.rutgers.edu/ sontag/336/morphogen gradients exposition green dev dynamics02.pdf
33. The idea of positional information is an old one in biology, but it was Lewis Wolpert in 1971 who formalized it; see: Lewis, J., J.M. Slack, and L. Wolpert, "Thresholds in development," J. Theor. Biol. 65 (1977), pp. 579-590. A good, non-mathematical review article is "One hundred years of positional information" by Lewis Wolpert, which appeared in Trends in Genetics 12 (1996), pp. 359-364. This last paper is posted to the course website.
How can small differences in morphogen lead to abrupt changes in cell fate?
For simplicity, let us think of a wormy one-dimensional organism, but the same ideas apply to a full
3-d model.
[Diagram: cells #1 through #N arranged in a row; the signal is highest at the left end and lower toward the right.]
We suppose that each cell may express a protein P whose level (concentration, if you wish) p determines a certain phenotypical (i.e., observable) characteristic. As a purely hypothetical and artificial example, it may be the case that P can attain two very distinct levels of expression: very low (or zero) or very high, and that a cell will look like a "nose" cell if p is high, and like a "mouth" cell if p is low.34
Moreover, we suppose that a certain morphogen S (we use S for signal) affects the expression
mechanism for the gene for P , so that the concentration s of S in the vicinity of a particular cell
influences what will happen to that particular cell.
The concentration of the signaling molecule S is supposed to be highest at the left end, and lowest at
the right end, of the organism, and it varies continuously. (This may be due to the mother depositing
S at one end of the egg, and S diffusing to the other end, for example.)
The main issue to understand is: since nearby cells detect only slightly different concentrations of S,
how can sudden changes of level of P occur?
[Diagram: the row of cells with sensed values s = 1, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2 from left to right; the cells with s from 1 down to 0.6 have p ≈ 1 and become nose cells, while those with s from 0.5 down to 0.2 have p ≈ 0 and become mouth cells.]
In other words, why don't we find, in between cells that are part of the nose (high p) and cells that are part of the mouth (low p), cells that are, say, 3/4 nose, 1/4 mouth? We want to understand how this thresholding effect could arise.
The fact that the DNA in all cells of an organism is, in principle, identical, is translated mathematically
into the statement that all cells are described by the same system of equations, but we include an input
parameter in these equations to represent the concentration s of the morphogen near any given cell.35
In other words, we'll think of the time evolution of the chemicals (such as the concentration of the protein P) as given by a differential equation:

dp/dt = f(p, s)
(of course, realistic models contain many proteins or other substances, interacting with each other
through mechanisms such as control of gene expression and signaling; we use an unrealistic single
equation just to illustrate the basic principle).
34. Of course, a real nose has different types of cells in it, but for this silly example, we'll just suppose that they all look the same, but they look very different from mouth-like cells, which we also assume all look the same.
35. We assume, for simplicity, that s is constant for each cell, or maybe that the cell samples the average value of s around the cell.
We assume that, from each given initial condition p(0), the solution p(t) will settle to some steady state p(∞); the value p(∞) describes what the level of P will be after a transient period. We think of p(∞) as determining whether we have a nose-cell or a mouth-cell. Of course, p(∞) depends on the initial state p(0) as well as on the value of the parameter s that the particular cell measures. We will assume that, at the start of the process, all cells are in the same initial state p(0). So, we need that p(∞) be drastically different only due to a change in the parameter s.36
To design a realistic f, we start with the positive feedback system that we had earlier used to illustrate bi-stability, and we add a term +ks as the simplest possible mechanism by which the concentration of signaling molecule may influence the system37:

dp/dt = f(p, s) = Vmax p^n / (Km^n + p^n) - λ p + k s .
Let us take, to be concrete, k = 5, Vmax = 15, λ = 7, Km = 1, and Hill coefficient n = 2.
There follow the plots of f(p, s) versus p, for three values of s:

s < s* ,    s = s* ,    s > s* ,

where s* ≈ 0.268. The respective phase lines are now shown below the graphs:
We see that for s < s*, there are two sinks (stable steady states), marked A and C respectively, as well as a source (unstable steady state), marked B. We think of A as the steady state protein concentration p(∞) representing mouth-like cells, and C as that for nose-like cells.
Of course, the exact position of A depends on the precise value of s. Increasing s by a small amount
means that the plot moves up a little, which means that A moves slightly to the right. Similarly, B
moves to the left and C to the right.
However, we may still think of a low and a high stable steady state (and an intermediate unstable state) in a qualitative sense.
Note that B, being an unstable state, will never be found in practice: the smallest perturbation makes
the solution flow away from it.
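The threshold behavior can be checked numerically with the constants quoted above (Vmax = 15, Km = 1, k = 5, n = 2; the degradation constant, whose symbol did not survive in this copy, is assumed here to be λ = 7): below the threshold there are three steady states, above it only one.

```python
# Count steady states of f(p, s) = Vmax p^2/(Km^2 + p^2) - lam*p + k*s
# by scanning for sign changes of f in p, for a low and a high value of s.
Vmax, Km, lam, k = 15.0, 1.0, 7.0, 5.0   # lam = 7 is an assumption (see text)

def f(p, s):
    return Vmax * p * p / (Km * Km + p * p) - lam * p + k * s

def count_steady_states(s, pmax=5.0, step=1e-3):
    count, p = 0, 0.0
    while p < pmax:
        if f(p, s) * f(p + step, s) < 0:
            count += 1
        p += step
    return count

print(count_steady_states(0.12))   # low s: two sinks and a source (A, B, C)
print(count_steady_states(0.5))    # high s: a single (high) steady state
```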
36. This is the phenomenon of bifurcations, which you should have encountered in the previous differential equations course.
37. This term could represent the role of s as a transcription factor for p. The model that we are considering is the one proposed in the original paper by Lewis et al.
For s > s*, there is only one steady state, which is stable. We denote this state as C, because it
corresponds to a high concentration level of P .
Once again, the precise value of C depends on the precise value of s, but it is still true that C represents
a high concentration.
Incidentally, a value of s exactly equal to s* will never be sensed by a cell: there is zero probability to have this precise value.
Now, assume that all cells in the organism start with no protein, that is, p(0) = 0.
The left-most cells, having s > s*, will settle into the high state C, i.e., they will become nose-like. The right-most cells, having s < s*, will settle into the low state A, i.e., they will become mouth-like. So we see how a sharp transition between cell types is achieved, merely due to a change from s > s* to s < s* as we consider cells from the left to the right end of the organism.
[Diagram: the same row of cells after settling: every cell with s > s* has p ≈ C and is a nose cell, while every cell with s < s* has p ≈ A and is a mouth cell.]
Moreover, this model has a most amazing feature, which corresponds to the fact that, once a cell's fate is determined, it will not revert38 to the original state. Indeed, suppose that, after a cell has settled to its steady state (high or low), we now suddenly wash out the morphogen, i.e., we set s to a very low value.
The behavior of every cell will now be determined by the phase line for low s:
This means that any cell starting with low protein P will stay low, and any cell starting with high
protein P will stay high.
A permanent memory of the morphogen effect is thus imprinted in the system, even after the signal is turned off!
Optional homework: Show that a Hill coefficient n = 1 would not have worked:

dp/dt = f(p, s) = Vmax p / (Km + p) - λ p + k s
has the property that there is only one steady state, which depends continuously on the signal s.
[Diagram: a row of cells, cell #1 through cell #N, with the signal a higher toward one end.]
The following plots show the graph of f(x) + a, for small, intermediate, and large a respectively. We indicate a roughly "low" level of x by the letter A, an "intermediate" level by B, and a "high" level by C.
Question: Suppose that the level of expression starts at x(0) = 0 for every cell.
(1) What pattern do we see after things settle to steady state?
(2) Next suppose that, after the system has so settled, we suddenly change the level of the signal a, so that now every cell sees the same value of a. This value of a, to which every cell is now exposed, corresponds to this plot of f(x) + a:
Answer:
Let us use this picture:

[Diagram: the row of cells divided into "left cells," "center cells," and "right cells."]
Those cells located toward the left will see these instructions of what speed to move at:
Therefore, starting from x = 0, they settle at a low gene expression level, roughly indicated by A.
Cells around the center will see these instructions:
Now, cells that started (from the previous stage of our experiment) near A will approach A, cells that were near B approach B, and cells that were near C have their "floor" removed from under them, so to speak, and are now told to move left, i.e., all the way down to B.
In summary, we have that starting at x = 0 at time zero, the pattern observed after the first part of the
experiment is:
AAABBBCCC ,
and after the second part of the experiment we obtain this final configuration:
AAABBBBBB .
(Exactly how many A's and B's appear depends on the precise form of the function f, etc. We are just representing the general pattern.)
The next page has a homework problem.
The answer is posted to the course website.
Homework Problem:
We consider a 1-d organism, with cells arranged on a line.
Each cell expresses a certain gene X according to the same differential equation
dx/dt = f(x) + a

but the cells toward the left end receive a low signal a ≈ 0, while those toward the right end see a high signal a (and the signal changes continuously in between).
The level of expression starts at x(0) = 0 for every cell.
This is what f + a looks like, for low, intermediate, and high values of a respectively:
8 Periodic Behavior
Periodic behaviors (i.e., oscillations) are very important in biology, appearing in diverse areas such as neural signaling, circadian rhythms, and heart beats.
You have seen examples of periodic behavior in the differential equations course, most probably the harmonic oscillator (mass-spring system with no damping)

dx/dt = y
dy/dt = -x

whose trajectories are circles, or, more generally, linear systems with eigenvalues that are purely imaginary, leading to ellipsoidal trajectories:
A serious limitation of such linear oscillators is that they are not robust:
Suppose that there is a small perturbation in the equations:

dx/dt = y
dy/dt = -x + εy

where ε ≠ 0 is small. The trajectories are not periodic anymore! Now dy/dt doesn't balance dx/dt just right, so the trajectory doesn't close on itself:
To put it in different terms, the particular oscillation depends on the initial conditions. Biological
objects, in contrast, tend to reset themselves (e.g., your internal clock adjusting after jetlag).
39. The jump is not described by the differential equation; think of the effect of some external disturbance that gives a "kick" to the system.
8.1
A (stable) limit cycle is a periodic trajectory which attracts other solutions (at least those starting
nearby) to it.40
Thus, a member of a family of parallel periodic solutions (as for linear centers) is not called a limit
cycle, because other close-by trajectories remain at a fixed distance away, and do not converge to it.
Limit cycles are robust in ways that linear periodic solutions are not:
If a (small) perturbation moves the state to a different initial state away from the cycle, the system
will return to the cycle by itself.
If the dynamics changes a little, a limit cycle will still exist, close to the original one.
The first property is obvious from the definition of limit cycle. The second property is not very difficult to prove either, using a Lyapunov function argument. (I'll explain the idea in class.)
8.2

In order to understand the definition, and to have an example that we can use for various purposes later, we will consider the following system41:

dx1/dt = μ x1 - x2 + ε x1 (x1^2 + x2^2)
dx2/dt = x1 + μ x2 + ε x2 (x1^2 + x2^2) ,

where we pick ε = -1 for definiteness, so that the system is:

dx1/dt = μ x1 - x2 - x1 (x1^2 + x2^2)
dx2/dt = x1 + μ x2 - x2 (x1^2 + x2^2) .

(Note that if we had picked ε = 0, we would have a linear system (a harmonic oscillator when μ = 0), and linear systems have no limit cycles.)
There are two other ways to write this system which help us understand it better.
The first is to use polar coordinates.
We let x1 = ρ cos θ and x2 = ρ sin θ, and differentiate with respect to time. Equating terms, we obtain separate equations for the magnitude ρ and the argument θ, as follows:

dρ/dt = ρ (μ - ρ^2)
dθ/dt = 1 .

(The equation in polar coordinates is only valid for x ≠ 0, that is, if ρ ≠ 0, so that θ is well-defined.)
40. Stable limit cycles are to all periodic trajectories as stable steady states are to all steady states.
41. Of course, this is a purely mathematical example.
68
Another useful way to rewrite the system is in terms of complex numbers: we represent the pair (x1, x2) by the complex number z = x1 + i x2. Then the equation becomes, for z = z(t):

dz/dt = (μ + i) z - |z|^2 z .

Prove that this equation is true, as a homework problem (use that dz/dt = dx1/dt + i dx2/dt).
We now analyze the system using polar coordinates. Since the differential equations for ρ and θ are decoupled, we may analyze each of them separately.

The θ-equation dθ/dt = 1 tells us that the solutions rotate at constant angular speed 1 (counter-clockwise).

Let us look next at the scalar differential equation dρ/dt = ρ(μ - ρ^2) for the magnitude ρ.

When μ ≤ 0, the origin is the only steady state, and every solution converges to zero. This means that the full planar system is such that all trajectories spiral into the origin.

When μ > 0, the origin of the scalar differential equation dρ/dt = ρ(μ - ρ^2) becomes unstable42, as we can see from the phase line. In fact, the velocity dρ/dt is negative for ρ > √μ and positive for 0 < ρ < √μ, so that there is a sink at ρ = √μ. This means that the full planar system is such that all trajectories spiral into the circle of radius √μ, which is, therefore, a limit cycle.
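The convergence of ρ to √μ is easy to confirm numerically (μ = 2 below is an arbitrary choice, and simple Euler stepping suffices for this scalar equation):

```python
# Integrate rho' = rho*(mu - rho^2) from initial conditions inside and
# outside the circle of radius sqrt(mu); both converge to sqrt(mu).
mu, dt = 2.0, 1e-3

def flow(rho0, t=20.0):
    rho = rho0
    for _ in range(int(t / dt)):
        rho += dt * rho * (mu - rho * rho)
    return rho

print(flow(0.1), flow(3.0))   # both approach sqrt(2) ~ 1.4142
```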
42. The passage from μ < 0 to μ > 0 is a typical example of what is called a supercritical Hopf bifurcation.

8.3 Poincare-Bendixson Theorem

We will use the following version of the theorem: suppose that D is a closed and bounded region of the plane, that trajectories which start in D cannot leave it, and that D contains either no steady states at all, or only repelling ones; then there is a periodic orbit inside D.
This theorem is proved in advanced differential equations books; the basic idea is easy to understand: if we start near the boundary, we must go towards the inside, and cannot cross back (because trajectories cannot cross). Since it cannot approach a source, the trajectory must approach a periodic orbit. (I'll explain the idea in class.)
We gave a simple version, sufficient for our purposes; one can state the theorem a little more generally,
saying that all trajectories will converge to either steady states, limit cycles, or connections among
steady states. One such version is as follows: if the omega-limit set ω(x)43 of a trajectory is compact and connected, and contains only finitely many equilibria, then these are the only possibilities for ω(x):

ω(x) is a steady state, or
ω(x) is a periodic orbit, or
ω(x) is a homoclinic or heteroclinic connection.
It is also possible to prove that if there is a unique periodic orbit, then it must be a limit cycle.
In general, finding an appropriate region D is usually quite hard; often one uses plots of solutions
and/or nullclines in order to guess a region.44
Invariance of a region D can be checked by using the following test: the outward-pointing normal vectors, at any point of the boundary of D, must make an angle of at least 90 degrees with the vector field at that point. Algebraically, this means that the dot product between an outward normal ~n and the vector field must satisfy

( dx/dt , dy/dt ) · ~n ≤ 0

at any boundary point.45
43. This is the set of limit points of the solution starting from an initial condition x.
44. In problems, I might give you a differential equation and a region, and ask you to prove that it is a trapping region.
45. If the dot product is strictly negative, this is fairly obvious, since the vector field must then point to the inside of D. When the vectors are exactly perpendicular, the situation is a little more subtle, especially if there are corners in the boundary of D (what is a normal at a corner?), but the equivalence is still true. The mathematical field of nonsmooth analysis studies such problems of invariance, especially for regions with possible corners.
Let us apply the theorem to our earlier example, taking μ = 1. We must find a suitable invariant region, one that contains the periodic orbit that we want to show exists. Cheating (because if we already know it is there, we don't need to find it!), we take as our region D the disk of radius √2. (Any large enough disk would have done the trick.)

To show that D is a trapping region, we must look at its boundary, which is the circle of radius √2, and show that the normal vectors, at any point of the boundary, form an angle of at least 90 degrees with the vector field at that point. This is exactly the same as showing that the dot product between the normal and the vector field is negative (or zero, if tangent).
At any point on the circle x1^2 + x2^2 = 2, a normal vector is (x1, x2) (since the arrow from the origin to the point is perpendicular to the circle), and the dot product of the vector field with this normal is:

[x1 - x2 - x1(x1^2 + x2^2)] x1 + [x1 + x2 - x2(x1^2 + x2^2)] x2 = (1 - (x1^2 + x2^2))(x1^2 + x2^2) = (1 - 2) · 2 = -2 < 0 .

Thus, the vector field points inside and the disk of radius √2 is a trapping region.
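The dot-product computation can be double-checked numerically by sampling points of the boundary circle (this is the μ = 1 system discussed above):

```python
import math

# Vector field of the mu = 1 example, and the dot product with the outward
# normal (x1, x2) at sample points of the circle x1^2 + x2^2 = 2.
def field(x1, x2):
    r2 = x1 * x1 + x2 * x2
    return (x1 - x2 - x1 * r2, x1 + x2 - x2 * r2)

for k in range(8):
    th = 2 * math.pi * k / 8
    x1 = math.sqrt(2) * math.cos(th)
    x2 = math.sqrt(2) * math.sin(th)
    f1, f2 = field(x1, x2)
    dot = f1 * x1 + f2 * x2
    assert dot < 0                 # field points inward at every sample
    print(f"theta = {th:.2f}: dot product = {dot:.6f}")
```

Every sampled dot product equals -2 (up to rounding), in agreement with the algebra.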
The only steady state is (0, 0), which we can see by noticing that if x1 x2 x1 (x21 + x22 ) = 0
and x1 + x2 x2 (x21 + x22 ) = 0 then multiplying by x1 the first equation, and the second by x2 , we
obtain that ( + x21 + x22 )(x21 + x22 ) = 0, so x1 = x2 = 0.
Linearizing at the origin, we have an unstable spiral. (Homework: check!) Thus, the only steady state
is repelling, which is the other property that we needed. So, we can apply the P-B Theorem.
We conclude that there is a periodic orbit inside this disk.46
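As a sanity check, both claims can be verified numerically for the system dx1/dt = x1 − x2 − x1(x1² + x2²), dx2/dt = x1 + x2 − x2(x1² + x2²). This is a minimal sketch (the RK4 stepper, step size, and initial point are arbitrary choices); in fact the periodic orbit turns out to be the unit circle:

```python
import math

def f(x1, x2):
    # the vector field of the example above
    r2 = x1*x1 + x2*x2
    return (x1 - x2 - x1*r2, x1 + x2 - x2*r2)

def rk4_step(x1, x2, h):
    k1 = f(x1, x2)
    k2 = f(x1 + h/2*k1[0], x2 + h/2*k1[1])
    k3 = f(x1 + h/2*k2[0], x2 + h/2*k2[1])
    k4 = f(x1 + h*k3[0], x2 + h*k3[1])
    return (x1 + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            x2 + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

# 1) the vector field points inward everywhere on the circle x1^2 + x2^2 = 2
for k in range(360):
    t = 2*math.pi*k/360
    x1, x2 = math.sqrt(2)*math.cos(t), math.sqrt(2)*math.sin(t)
    dx1, dx2 = f(x1, x2)
    assert x1*dx1 + x2*dx2 < 0   # should equal (1 - 2)*2 = -2

# 2) a trajectory starting near the repelling origin approaches radius 1
x1, x2 = 0.1, 0.0
for _ in range(20000):           # integrate up to t = 100
    x1, x2 = rk4_step(x1, x2, 0.005)
r = math.hypot(x1, x2)
print(round(r, 4))  # -> 1.0 (the periodic orbit is the unit circle)
```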
8.4
A typical way in which periodic orbits arise in models in biology and many other fields can be illustrated with the well-known van der Pol oscillator.47 After some changes of variables, which we do not discuss here, the van der Pol oscillator becomes this system:

  dx/dt = y + x − x³/3
  dy/dt = −x

The only steady state is at (0, 0), which repels, since the Jacobian has positive determinant and trace:

  [ 1 − x²   1 ]                [  1   1 ]
  [  −1      0 ]  at (0,0)  =   [ −1   0 ] .
46 In fact, using thin annular regions 1 − ε < x1² + x2² < 1 + ε around the unit circle, one can prove by a similar argument that the periodic orbit is unique and, therefore, is a limit cycle.

47 Balthazar van der Pol was a Dutch electrical engineer, whose oscillator models of vacuum tubes are a routine example in the theory of limit cycles; his work was motivated by models of the human heart and an interest in arrhythmias. The original paper was: B. van der Pol and J. van der Mark, "The heartbeat considered as a relaxation oscillation, and an electrical model of the heart," Phil. Mag. Suppl. #6 (1928), pp. 763-775.
We will show that there are periodic orbits (one can also show that there is a limit cycle, but we will not do so), by applying the Poincaré-Bendixson Theorem.
To apply P-B, we consider the following special region (a hexagon; see the figure). We will prove that, on the boundary, the vector field points inside, as shown by the arrows.

The boundary is made up of 6 segments but, by symmetry (since the region is symmetric and the equation is odd), it is enough to consider 3 of them:

  x = 3, −3 ≤ y ≤ 6
  y = 6, 0 ≤ x ≤ 3
  y = x + 6, −3 ≤ x ≤ 0 .

x = 3, −3 ≤ y ≤ 6: we may pick ~n = (1, 0), so

  ~n · (dx/dt, dy/dt) = dx/dt = y + x − x³/3 ,

and, substituting x = 3, we obtain:

  dx/dt = y + 3 − 9 = y − 6 ≤ 0 .

Therefore, we know that the vector field points to the left on this boundary segment.

We still need to make sure that things do not escape through a corner, though. In other words, we need to check that, at the corners, there cannot be any outward-pointing arrows such as the red ones in the figure. At the top corner, x = 3, y = 6, we have dy/dt = −3 < 0, so the corner arrow must point down, and hence SW, so we are OK. At the bottom corner, (3, −3), also dy/dt = −3 < 0, and dx/dt = −9, so the vector field at that point also points inside.

y = 6, 0 ≤ x ≤ 3: we may pick ~n = (0, 1), so

  ~n · (dx/dt, dy/dt) = dy/dt = −x ≤ 0 ,

and the corners are also OK (for example, at (0, 6): dx/dt = 6 > 0).

y = x + 6, −3 ≤ x ≤ 0: we pick the outward normal ~n = (−1, 1) and take the dot product:

  (−1, 1) · (y + x − x³/3, −x) = −2x − y + x³/3 ,

which, substituting y = x + 6, equals x³/3 − 3x − 6; on −3 ≤ x ≤ 0 its maximum value is 2√3 − 6 < 0 (attained at x = −√3), so this segment is fine as well.
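The three dot products just computed can also be sampled numerically along the whole boundary. This is a small sketch (the sampling resolution is an arbitrary choice):

```python
def vf(x, y):
    # the van der Pol vector field in the coordinates used above
    return (y + x - x**3/3, -x)

# (normal_x, normal_y, point_x, point_y) samples on the three segments
samples = []
N = 1000
for i in range(N + 1):
    s = i / N
    samples.append((1, 0, 3.0, -3 + 9*s))   # x = 3, -3 <= y <= 6, normal (1, 0)
    samples.append((0, 1, 3*s, 6.0))        # y = 6, 0 <= x <= 3, normal (0, 1)
    x = -3 + 3*s
    samples.append((-1, 1, x, x + 6))       # y = x + 6, -3 <= x <= 0, normal (-1, 1)

dots = [n1*vf(x, y)[0] + n2*vf(x, y)[1] for (n1, n2, x, y) in samples]
print(max(dots) <= 0)  # -> True (the dot product vanishes only at corners)
```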
8.5
Bendixson's Criterion
There is a useful criterion to help conclude that there cannot be any periodic orbit in a given simply-connected (no holes) region D:
If the divergence of the vector field is everywhere positive48 or is everywhere negative inside D,
then there cannot be a periodic orbit inside D.
Sketch of proof (by contradiction):
Suppose that there is some such periodic orbit, which describes a simple closed curve C.
Recall that the divergence of F(x, y) = (f(x, y), g(x, y)) is defined as:

  div F = ∂f/∂x + ∂g/∂y .
The Gauss Divergence Theorem (or Green's Theorem), applied to the region R enclosed by C (which is contained in D, because D has no holes), says that:

  ∫∫_R div F(x, y) dx dy = ∮_C ~n · F ds

(the right-hand expression is the line integral of the dot product of a unit outward normal with F).49

Now, saying that C is an orbit means that F is tangent to C, so the dot product is zero, and therefore

  ∫∫_R div F(x, y) dx dy = 0 .
But, if div F (x, y) is everywhere positive, then the integral is positive, and we get a contradiction.
Similarly if it is everywhere negative.
Example: dx/dt = x, dy/dt = y. Here the divergence is 2 everywhere, so there cannot exist any periodic orbits (inside any region).

It is very important to realize what the theorem does not say. Suppose that we take the example dx/dt = x, dy/dt = −y. Since the divergence is identically zero, the Bendixson criterion tells us nothing. In fact, this is a linear saddle, so we know (for other reasons) that there are no periodic orbits.

On the other hand, for the example dx/dt = −y, dy/dt = x, which also has divergence identically zero, periodic orbits do exist!
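These three examples can be tested numerically. The sketch below approximates the divergence by central differences (the step size and the sample grid are arbitrary choices):

```python
def divergence(F, x, y, h=1e-5):
    """Numerical divergence df/dx + dg/dy of F = (f, g), by central differences."""
    df_dx = (F(x + h, y)[0] - F(x - h, y)[0]) / (2 * h)
    dg_dy = (F(x, y + h)[1] - F(x, y - h)[1]) / (2 * h)
    return df_dx + dg_dy

grid = [(x / 3, y / 3) for x in range(-9, 10) for y in range(-9, 10)]

# dx/dt = x, dy/dt = y: divergence 2 everywhere, so no periodic orbits anywhere
assert all(abs(divergence(lambda x, y: (x, y), x, y) - 2) < 1e-6 for x, y in grid)

# linear saddle dx/dt = x, dy/dt = -y: divergence 0, the criterion says nothing
assert all(abs(divergence(lambda x, y: (x, -y), x, y)) < 1e-6 for x, y in grid)

# rotation dx/dt = -y, dy/dt = x: divergence also 0, yet periodic orbits exist
assert all(abs(divergence(lambda x, y: (-y, x), x, y)) < 1e-6 for x, y in grid)
print("ok")
```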
8.6
Hopf Bifurcations
Mathematically, periodic orbits often arise from the Hopf bifurcation phenomenon.

The Hopf (or Poincaré-Andronov-Hopf) bifurcation occurs when a pair of complex eigenvalues crosses the imaginary axis as a parameter is moved (and, in dimensions bigger than two, the remaining eigenvalues have negative real part), provided that some additional technical conditions hold. (These conditions tend to be satisfied in examples.)
It is very easy to understand the basic idea.
We consider a system:

  dx/dt = fμ(x)

in which a parameter μ appears. We assume that the system has dimension two.

Suppose that there are a value μ0 of this parameter, and a steady state x0, with the following properties:

For μ < μ0, the linearization at the steady state x0 is stable, and there is a pair of complex conjugate eigenvalues with negative real part.

As μ − μ0 changes from negative to positive, the linearization goes through having a pair of purely imaginary eigenvalues (at μ = μ0) to having a pair of complex conjugate eigenvalues with positive real part.

Thus, near x0, the motion changes from a stable spiral to an unstable spiral as μ crosses μ0.

If the steady state happens to be a sink even when μ = μ0, it must mean that there are nonlinear terms pushing back towards x0 (see the example below).

These terms will still be there for μ > μ0, μ ≈ μ0.

Thus, the spiraling-out trajectories cannot go very far, and a limit cycle is approached.
(Another way to think of this is that, in typical biological problems, trajectories cannot escape to
infinity, because of conservation of mass, etc.)
In arbitrary dimensions, the situation is similar. One assumes that all other n − 2 eigenvalues have negative real part, for all μ near μ0.
The n − 2 everywhere-negative eigenvalues have the effect of pushing the dynamics towards a two-dimensional surface that looks, near x0, like the space spanned by the two complex conjugate eigenvectors corresponding to the purely imaginary eigenvalues at μ = μ0.
On this surface, the two-dimensional argument that we just gave can be applied.
Let us give more details.
Consider the example that we met earlier:

  dx1/dt = μ x1 − x2 + σ x1 (x1² + x2²)
  dx2/dt = x1 + μ x2 + σ x2 (x1² + x2²)

With σ = −1, this is the supercritical Hopf bifurcation case in which we go, as already shown, from a globally asymptotically stable equilibrium to a limit cycle as μ crosses from negative to positive (μ0 is zero).
Now suppose given a general system (I will not ask questions in tests about this material; it is merely FYI)50:

  dx/dt = f(x, μ)

in dimension 2, where μ is a scalar parameter and f is assumed smooth. Suppose that for all μ near zero there is a steady state ξ(μ), with eigenvalues λ(μ) = r(μ) ± iω(μ), with r(0) = 0 and ω(0) = ω0 > 0, that r′(0) ≠ 0 (the eigenvalues cross the imaginary axis with nonzero velocity), and that the quantity α defined below is nonzero. Then, up to a local topological equivalence and a time reparametrization, one can reduce the system to the form given in the previous example, and there is a Hopf bifurcation, supercritical or subcritical depending on σ = the sign of α.51 There is no need to perform the transformation if all we want is to decide whether there is a Hopf bifurcation. The general recipe is as follows.

Let A be the Jacobian of f evaluated at ξ0 = ξ(0), μ = 0, and find two complex vectors p, q such that

  A q = iω0 q ,   Aᵀ p = −iω0 p ,   p̄ · q = 1 .

Compute the dot product H(z, z̄) = p̄ · f(ξ0 + z q + z̄ q̄, 0) and consider the formal Taylor series:

  H(z, z̄) = iω0 z + Σ_{j+k≥2} (1/(j! k!)) g_jk z^j z̄^k .

Then:

  α = (1/(2ω0²)) Re( i g20 g11 + ω0 g21 ) .
50 See e.g. Yu.A. Kuznetsov, Elements of Applied Bifurcation Theory, 2nd ed., Springer-Verlag, New York, 1998.

51 One may interpret the condition on α in terms of a Lyapunov function that guarantees stability at μ = 0, in the supercritical case; see e.g.: Mees, A.I., Dynamics of Feedback Systems, John Wiley & Sons, New York, 1981.
One may use the following Maple commands, which are copied from "NLDV computer session XI: Using Maple to analyse Andronov-Hopf bifurcation in planar ODEs," by Yu.A. Kuznetsov, Mathematical Institute, Utrecht University, November 16, 1999. They are illustrated with the following chemical model (Brusselator):

  dx1/dt = A − (B + 1) x1 + x1² x2 ,   dx2/dt = B x1 − x1² x2

where one fixes A > 0 and takes B as a bifurcation parameter. The conclusion is that at B = 1 + A² the system exhibits a supercritical Hopf bifurcation.
restart:
with(linalg):
readlib(mtaylor):
readlib(coeftayl):
F[1]:=A-(B+1)*X[1]+X[1]^2*X[2];
F[2]:=B*X[1]-X[1]^2*X[2];
J:=jacobian([F[1],F[2]],[X[1],X[2]]);
K:=transpose(J);
sol:=solve({F[1]=0,F[2]=0},{X[1],X[2]});
assign(sol);
T:=trace(J);
diff(T,B);
sol:=solve({T=0},{B});
assign(sol);
assume(A>0);
omega:=sqrt(det(J));
ev:=eigenvects(J,radical);
q:=ev[1][3][1];
et:=eigenvects(K,radical);
P:=et[2][3][1];
s1:=simplify(evalc(conjugate(P[1])*q[1]+conjugate(P[2])*q[2]));
c:=simplify(evalc(1/conjugate(s1)));
p[1]:=simplify(evalc(c*P[1]));
p[2]:=simplify(evalc(c*P[2]));
simplify(evalc(conjugate(p[1])*q[1]+conjugate(p[2])*q[2]));
F[1]:=A-(B+1)*x[1]+x[1]^2*x[2];
F[2]:=B*x[1]-x[1]^2*x[2];
# use z1 for the conjugate of z:
x[1]:=evalc(X[1]+z*q[1]+z1*conjugate(q[1]));
x[2]:=evalc(X[2]+z*q[2]+z1*conjugate(q[2]));
H:=simplify(evalc(conjugate(p[1])*F[1]+conjugate(p[2])*F[2]));
# get Taylor expansion:
g[2,0]:=simplify(2*evalc(coeftayl(H,[z,z1]=[0,0],[2,0])));
g[1,1]:=simplify(evalc(coeftayl(H,[z,z1]=[0,0],[1,1])));
g[2,1]:=simplify(2*evalc(coeftayl(H,[z,z1]=[0,0],[2,1])));
alpha:=factor(1/(2*omega^2)*Re(I*g[2,0]*g[1,1]+omega*g[2,1]));
evalc(alpha);
# above needed to see that this is a negative number (so supercritical)
8.7
Let us consider this system, which is exactly as in our version of the van der Pol oscillator, except that, before, we had ε = 1:

  dx/dt = y + x − x³/3
  dy/dt = −ε x
We are interested specifically in what happens when ε is positive but small (0 < ε ≪ 1).
Notice that then y changes slowly.
So, we may think of y as a constant insofar as its effect on x (the faster variable) is concerned.
How does

  dx/dt = f_a(x) = a + x − x³/3

behave?
[Figure: phase lines of f_a(x) = a + x − x³/3 for a = −1, a = 0, a = 2/3 (and just above 2/3), and a = 1, showing the steady states and the direction of motion along the x-axis.]
Now let us consider what the solution of the system of differential equations looks like, if starting at a point with x(0) ≈ 0 and y(0) ≈ −1.

Since y(t) ≈ −1 for a long time, x "sees" the equation dx/dt = f_{−1}(x), and therefore x(t) wants to approach a negative steady state x_a (approximately at −2).
(If y were constant, indeed x(t) → x_a.)

However, a = y is not constant: it is slowly increasing (y′ = −εx > 0, since x < 0).

Thus, the equilibrium that x is being attracted to keeps moving closer and closer to −1 until, at exactly a = 2/3, the low equilibrium disappears, and there is only the large one (around x = 2); thus x will quickly converge to that larger value.

Now, however, x(t) is positive, so y′ = −εx < 0, that is, a starts decreasing.
Repeating this process, one obtains a periodic motion in which slow increases and decreases are interspersed with quick motions.
This is what is often called a relaxation (or hysteresis-driven) oscillation.
Here are computer plots of x(t) for one such solution, together with the same solution shown in the phase plane:
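A plot like the ones described can be generated from a minimal simulation sketch (ε = 0.05 and all integration settings below are arbitrary choices). After a transient, x(t) should sweep between roughly −2 and 2, with fast jumps occurring when the slowly varying a = y reaches ±2/3:

```python
eps = 0.05

def f(x, y):
    return (y + x - x**3/3, -eps*x)

def rk4(x, y, h):
    k1 = f(x, y)
    k2 = f(x + h/2*k1[0], y + h/2*k1[1])
    k3 = f(x + h/2*k2[0], y + h/2*k2[1])
    k4 = f(x + h*k3[0], y + h*k3[1])
    return (x + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            y + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

x, y, h = 0.1, -1.0, 0.002
xs = []
for i in range(150000):          # integrate up to t = 300
    x, y = rk4(x, y, h)
    if i*h > 100:                # discard the initial transient
        xs.append(x)
print(round(max(xs), 2), round(min(xs), 2))  # close to 2 and -2
```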
8.8
Now let us use the information that ε is small: this means that dy/dt is always very small compared to dx/dt, i.e., the arrows in the phase plane are (almost) horizontal, except very close to the graph of y = f(x), where both are small (and exactly vertical when y = f(x)):
Now, suppose that the nullclines look exactly as in these pictures, so that f′ < 0 and g′ > 0 at the steady state.
The Jacobian of the vector field

  ( f(x) − y , ε(g(x) − y) )

is

  [ f′(x0)    −1 ]
  [ ε g′(x0)  −ε ]

and therefore (remember that f′(x0) < 0) the trace f′(x0) − ε is negative, and the determinant ε(g′(x0) − f′(x0)) is positive (because g′(x0) > 0), and the steady state is a sink (stable).
Thus, we expect trajectories to look like this:
Observe that a large enough perturbation from the steady state leads to a large excursion (the trajectory is carried very quickly to the other side) before the trajectory can return.
In contrast, a small perturbation does not result in such excursions, since the steady state is stable.
Zooming in:
This type of behavior is called excitability: low enough disturbances have no effect, but over a threshold, a large reaction occurs.
In contrast, suppose that the nullcline y = g(x) intersects the nullcline y = f(x) on the increasing part of the latter (f′ > 0).
Then, the steady state is unstable for small ε, since the trace is f′(x0) − ε ≈ f′(x0) > 0.
We then get a relaxation oscillation, instead of an excitable system:
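Excitability can be demonstrated on a concrete instance of the fast-slow system dx/dt = f(x) − y, dy/dt = ε(g(x) − y). The particular f, g, ε, and kick sizes below are arbitrary illustrative choices (not from the notes); g is linear and chosen so the nullclines cross where f′ < 0:

```python
eps = 0.05
f = lambda x: x - x**3/3       # fast nullcline y = f(x), the cubic
g = lambda x: x + 1.125        # slow nullcline y = g(x); crosses the cubic at
                               # x0 = -1.5, where f'(x0) = -1.25 < 0 (stable sink)

def rhs(x, y):
    return (f(x) - y, eps*(g(x) - y))

def peak_after_kick(dx0, t_end=100.0, h=0.01):
    # start at the steady state (-1.5, -0.375), displaced horizontally by dx0
    x, y = -1.5 + dx0, -0.375
    xmax = x
    for _ in range(int(t_end/h)):
        k1 = rhs(x, y); k2 = rhs(x + h/2*k1[0], y + h/2*k1[1])
        k3 = rhs(x + h/2*k2[0], y + h/2*k2[1]); k4 = rhs(x + h*k3[0], y + h*k3[1])
        x += h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        y += h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        xmax = max(xmax, x)
    return xmax

print(peak_after_kick(0.7) < 0)     # sub-threshold kick: trajectory just returns -> True
print(peak_after_kick(1.5) > 1.5)   # supra-threshold kick: large excursion      -> True
```

The threshold sits near the middle branch of the cubic (here around x ≈ −0.4): kicks that do not reach it decay back, kicks past it trigger a full excursion to the right branch.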
8.9
Neurons
Neurons are nerve cells; there are about 100 billion (10^11) in the human brain.
Neurons may be short (1 mm) or very long (1 m from the spinal cord to foot muscles).
Each neuron is a complex information processing device, whose inputs are neurotransmitters (electrically charged chemicals) which accumulate at the dendrites.
Neurons receive signals from other neurons (from as many as 150,000, in the cerebral cortex, the center of cognition), which connect to them at synapses.
When the net voltage received by a neuron is higher than a certain threshold (about 1/10 of a volt), the
neuron fires an action potential, which is an electrical signal that travels down the axon, sort of an
output wire of the neuron. Signals can travel at up to 100m/s; the higher speeds are achieved when
the axon is covered in a fatty insulation (myelin).
At the ends of axons, neurotransmitters are released into the dendrites of other neurons.
Information processing and computation arise from these networks of neurons.
The strength of synaptic connections is one way to program networks; memory (in part) consists of
finely tuning these strengths.
The mechanism for action potential generation is well understood. A mathematical model given in:
Hodgkin, A.L. and Huxley, A.F., A Quantitative Description of Membrane Current and its Application to Conduction and Excitation in Nerve, Journal of Physiology 117 (1952): 500-544 won the
authors a Nobel Prize (in 1963), and is still one of the most successful examples of mathematical
modeling in biology. Let us sketch it next.
8.10
81
The basic premise is that currents are due to Na and K ion pathways. Normally, there is more K+
inside than outside the cell, and the opposite holds for Na+ . Diffusion through channels works against
this imbalance, which is maintained by active pumps (which account for about 2/3 of the cell's energy
consumption!). These pumps act against a steep gradient, exchanging 3 Na+ ions out for each 2 K+
that are allowed in. An overall potential difference of about 70mV is maintained (negative inside the
cell) when the cell is at rest.
A neuron can be stimulated by external signals (touch, taste, etc., sensors), or by an appropriate
weighted sum of inhibitory and excitatory inputs from other neurons through dendrites (or, in the
Hodgkin-Huxley and usual lab experiments, artificially with electrodes).
A large enough potential change triggers a nerve impulse (action potential or spike), starting from
the axon hillock (start of axon) as follows:
(1) voltage-gated Na+ channels open (think of a gate opening); these let sodium ions in, so the
inside of the cell becomes more positive, and, through a feedback effect, even more gates open;
(2) when the voltage difference is +50mV, voltage-gated K+ channels open and quickly let potassium out;
(3) the Na+ channels close;
(4) the K+ channels close, so we are back to resting potential.
The Na+ channels cannot open again for some minimum time, giving the cell a refractory period.
This activity, locally in the axon, affects neighboring areas, which then go through the same process,
a chain-reaction along the axon. Because of the refractory period, the signal cannot go back, and a
direction of travel for the signal is well-defined. See an animation in the course website:
http://www.math.rutgers.edu/ sontag/336/finlay-markham-chain-action-potentials.gif
It is important to realize that the action potential is only generated if the stimulus is large enough. It is
an all or (almost) nothing response. An advantage is that the signal travels along the axon without
decay - it is regenerated along the way. The binary (digital) character of the signal makes it very
robust to noise.
There is another aspect that is remarkable, too: a continuous stimulus of high intensity will result in a
higher frequency of spiking. Amplitude modulation (as in AM radio) gets transformed into frequency
modulation (as in FM radio, which is far more robust to noise).
8.11
Model
The basic HH model is for a small segment of the axon. Their model was done originally for the giant
axon of the squid (large enough to stick electrodes into, with the technology available at the time), but
similar models have been validated for other neurons.
(Typical simulations put together perhaps thousands of such basic compartments, or alternatively set
up a partial differential equation, with a spatial variable to represent the length of the axon.)
The model has four variables: the potential difference v(t) between the inside and outside of the
neuron, and the activity of each of the three types of gates (two types of gates for sodium and one
for potassium). These activities may be thought of as relative fractions (concentrations) of open
channels, or probabilities of channels being open. There is also a term I for the external current being
applied.
  C dv/dt = −gNa(t) (v − vNa) − gK(t) (v − vK) − ḡL (v − vL) + I
  τm(v) dm/dt = m∞(v) − m
  τn(v) dn/dt = n∞(v) − n
  τh(v) dh/dt = h∞(v) − h

where the sodium and potassium conductances are

  gK(t) = ḡK n(t)⁴ ,   gNa(t) = ḡNa m(t)³ h(t) .
The equation for v comes from a capacitor model of membranes as charge-storage elements. The first three terms on the right correspond to the currents flowing through the Na and K gates, plus an additional "leak" current L that accounts for all other gates and channels, which are not voltage-dependent.
The currents are proportional to the difference between the actual voltage and the Nernst potential for each of the species (the potential at which the electrical and chemical driving forces would balance), multiplied by conductances g that represent how open the channels are.
The conductances, in turn, are proportional to certain powers of the open probabilities of the different
gates. (The powers were fit to data, but can be justified in terms of cooperativity effects.)
The open probabilities, in turn, as well as the time constants (the τ's), depend on the current net voltage difference v(t). H&H found the following formulas by fitting to data. Let us write:

  (1/τm(v)) (m∞(v) − m) = αm(v) (1 − m) − βm(v) m

(so that dm/dt = αm(v)(1 − m) − βm(v)m), and similarly for n and h. In terms of the α's and β's, H&H's formulas are as follows:

  αm(v) = 0.1 (25 − v) / (exp((25 − v)/10) − 1) ,   βm(v) = 4 exp(−v/18) ,
  αh(v) = 0.07 exp(−v/20) ,                         βh(v) = 1 / (exp((30 − v)/10) + 1) ,
  αn(v) = 0.01 (10 − v) / (exp((10 − v)/10) − 1) ,  βn(v) = 0.125 exp(−v/80) ,

where the constants are ḡK = 36, ḡNa = 120, ḡL = 0.3, vNa = 115, vK = −12, and vL = 10.6.
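These formulas transcribe directly into code. The sketch below simulates the four-variable model with plain Euler steps (the applied currents, step size, and choice of Euler are arbitrary; the 0.125 coefficient in βn is the standard published value, which is garbled in these notes). With no applied current the model should sit at rest; a sustained current of 10 should produce action potentials:

```python
import math

# H&H rate functions (v in mV relative to rest, time in ms)
def a_m(v):
    x = 25.0 - v
    return 1.0 if x == 0 else 0.1*x/(math.exp(x/10) - 1)  # limit at removable singularity
def b_m(v): return 4.0*math.exp(-v/18)
def a_h(v): return 0.07*math.exp(-v/20)
def b_h(v): return 1.0/(math.exp((30.0 - v)/10) + 1)
def a_n(v):
    x = 10.0 - v
    return 0.1 if x == 0 else 0.01*x/(math.exp(x/10) - 1)
def b_n(v): return 0.125*math.exp(-v/80)

gK, gNa, gL = 36.0, 120.0, 0.3
vK, vNa, vL = -12.0, 115.0, 10.6
C = 1.0

def simulate(I, t_end=50.0, dt=0.01):
    """Euler-integrate the HH equations with constant applied current I;
    return the maximum voltage reached."""
    v = 0.0   # start at rest, with the gates at their steady-state values
    m = a_m(v)/(a_m(v) + b_m(v))
    h = a_h(v)/(a_h(v) + b_h(v))
    n = a_n(v)/(a_n(v) + b_n(v))
    vmax = v
    for _ in range(int(t_end/dt)):
        dv = (-gNa*m**3*h*(v - vNa) - gK*n**4*(v - vK) - gL*(v - vL) + I)/C
        m += dt*(a_m(v)*(1 - m) - b_m(v)*m)
        h += dt*(a_h(v)*(1 - h) - b_h(v)*h)
        n += dt*(a_n(v)*(1 - n) - b_n(v)*n)
        v += dt*dv
        vmax = max(vmax, v)
    return vmax

print(simulate(I=0.0) < 5.0)    # no stimulus: stays near rest         -> True
print(simulate(I=10.0) > 50.0)  # sustained current: action potentials -> True
```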
The way in which H&H did this fit is, to a large extent, the best part of the story. Basically, they
performed a voltage clamp experiment, by inserting an electrode into the axon, thus permitting a
plot of current against voltage, and deducing conductances for each channel. (They needed to isolate
the effects of the different channels; the experiments are quite involved, and we don't have time to go
over them in this course.)
For an idea of how good the fits are, look at these plots of experimental gK (V )(t) and gN a (V )(t), for
different clamped V s (circles) compared to the model predictions (solid curves).
Here are the plots of n, m, h in response to a stimulus at t = 5 of duration 1sec, with current=0.1:
There are two stable steady states: vr (resting) and ve (excited), as well as a saddle vs. Depending on where the initial voltage (set by a transient current I) lies relative to a separatrix, trajectories converge as t → ∞ to either the excited state or stay near the resting one.
(Of course, h and n are not really constant, so the analysis must be complemented with consideration of small changes in h and n. We do not provide details here.)
An alternative view, on a longer time scale, is also possible. FitzHugh observed (and you will, too, in an assigned project; see also the graph shown earlier) that h(t) + n(t) ≈ 0.8, approximately constant during an action potential. (Notice the approximate symmetry of h and n in the plots.) This allows one to eliminate h from the equations. Also, assuming that τm ≪ 1 (because we are looking at a longer time scale), we may replace m(t) by its quasi-steady-state value m∞(v). We end up with a new two-dimensional system:

  C dv/dt = −ḡK n⁴ (v − vK) − ḡNa m∞(v)³ (0.8 − n) (v − vNa) − ḡL (v − vL)
  τn(v) dn/dt = n∞(v) − n

which has these nullclines (dots for dn/dt = 0):
We have fast behavior in the horizontal direction (n ≈ constant), leading to v approaching the nullclines fast, with a slow drift in n that then produces, as we saw earlier when studying a somewhat simpler model of excitable behavior, a spike of activity.

Note that if the nullclines are perturbed so that they now intersect in the middle part of the cubic-looking curve (for v, this would be achieved by considering the external current I as a constant), then a relaxation oscillator will result. Moreover, if the perturbation is larger, so that the intersection is away from the "elbows," the velocity of the trajectories should be higher (because trajectories do not slow down near the steady state). This explains frequency modulation as well.

Much of the qualitative theory of relaxation oscillations and excitable systems originated in the analysis of this example and its mathematical simplifications.
PDE Models
One may also study the space-dependence of a particular protein in a single cell. For example, this picture53 shows the gradients of G-proteins in response to chemoattractant binding to receptors on the surface of Dictyostelium discoideum amoebas:
9.1
Densities
We write space variables as x=(x1 , x2 , x3 ) (or just (x, y) in dimension 2, or (x, y, z) in dimension 3).
We will work with densities c(x, t), which are understood intuitively in the following sense.
Suppose that we denote by C(R, t) the amount of a type of particle (or number of individuals, mass
of proteins of a certain type, etc.) in a region R of space, at time t.
Then, the density around point x, at time t, c(x, t), is:

  c(x, t) ≈ C(R, t) / vol(R)

for small regions R around x.

53 From: Jin, T., Zhang, N., Long, Y., Parent, C.A., Devreotes, P.N., "Localization of the G protein βγ complex in living cells during chemotaxis," Science 287 (2000): 1034-1036.
9.2
We will assume that, at each point in space, there might take place a reaction that results in particles (individuals, proteins, bacteria, whatever) being created (or destroyed, depending on the sign).
This production (or decay) occurs at a certain rate σ(x, t) which, in general, depends on the location x and the time t. (If there is no reaction, then σ(x, t) ≡ 0.)
For scalar c, σ will typically be a formation or degradation rate.
More generally, if one considers vectors c(x, t), with the coordinates of c representing for example the densities of different chemicals, then σ(x, t) would represent the reactions among chemicals that happen to be in the same place at the same time.
The rate σ is a rate per unit volume, per unit of time. That is, if Π(R, [a, b]) is the number of particles created (eliminated, if negative) in a region R during the time interval [a, b], then the average rate of growth is:

  σ(x, t) ≈ Π(R, [t, t + Δt]) / (vol(R) Δt)

for small cubes R around x and small time increments Δt. This means that

  Π(R, [a, b]) = ∫_a^b ∫∫∫_R σ(x, t) dx dt .
9.3
[Figure: a thin tube of constant cross-sectional area A, placed along the positive x-axis; material flows through the cross-sections at x and at x + Δx (fluxes Jin and Jout), and may be created or eliminated inside the segment.]

We derive the basic conservation equation for this one-dimensional situation, writing J(x, t) for the flux through the cross-section at position x at time t. We also need the following formulas, which follow from Δy Δz = A:

  C(R, t) = ∫∫∫_R c(~x, t) d~x = ∫_{x1}^{x2} c(x, t) A dx ,

  Π(R, [a, b]) = ∫_a^b ∫∫∫_R σ(~x, t) d~x dt = ∫_a^b ∫_{x1}^{x2} σ(x, t) A dx dt .

For the segment between x and x + Δx, and the time interval [t, t + Δt]:

  net flow through the cross-section at x:   Jin = ∫_t^{t+Δt} J(x, τ) A dτ

  net flow through the cross-section at x + Δx:   Jout = ∫_t^{t+Δt} J(x + Δx, τ) A dτ

  net creation (elimination):   ∫_t^{t+Δt} ∫_x^{x+Δx} σ(ξ, τ) A dξ dτ

  starting amount in the segment:   Ct = ∫_x^{x+Δx} c(ξ, t) A dξ

  ending amount in the segment:   Ct+Δt = ∫_x^{x+Δx} c(ξ, t + Δt) A dξ .

Conservation of particles says that:

  Ct+Δt − Ct = ∫_t^{t+Δt} ( J(x, τ) − J(x + Δx, τ) ) A dτ + ∫_t^{t+Δt} ∫_x^{x+Δx} σ(ξ, τ) A dξ dτ .

So, dividing by A Δt, letting Δt → 0, and applying the Fundamental Theorem of Calculus:

  ∫_x^{x+Δx} (∂c/∂t)(ξ, t) dξ = J(x, t) − J(x + Δx, t) + ∫_x^{x+Δx} σ(ξ, t) dξ .

Finally, dividing by Δx, taking Δx → 0, and once again using the FTC, we conclude:

  ∂c/∂t = −∂J/∂x + σ

This is the basic equation that we will use from now on.
We only treated the one-dimensional (i.e., uniform cross-section) case. However, the general case, when R is an arbitrary region in 3-space (or in 2-space), is totally analogous. One must define the flux J(x, t) as a vector which indicates the maximal-flow direction at (x, t); its magnitude indicates the number of particles crossing, per unit time, a unit area perpendicular to J.
One derives, using Gauss' theorem, the following equation:

  ∂c/∂t = −div J + σ

where the divergence of J = (J1, J2, J3) at x = (x1, x2, x3) is

  div J = ∇ · J = ∂J1/∂x1 + ∂J2/∂x2 + ∂J3/∂x3 .

In the scalar case, div J is just ∂J/∂x, of course.
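The telescoping step in the derivation (fluxes cancel at interior cross-sections) has an exact discrete analogue, which makes a good consistency check. In the finite-volume sketch below (all grid and parameter values are arbitrary choices), the change in total mass equals creation minus net outflow, step by step:

```python
v = 1.0                 # transport speed, used for the flux J = v*c
N, dx, dt = 200, 1.0/200, 1.0/1000
sigma = 0.5             # uniform creation rate
c = [1.0 if 0.2 < (i + 0.5)*dx < 0.4 else 0.0 for i in range(N)]  # initial pulse

mass0 = sum(c)*dx
outflow = 0.0
created = 0.0
for _ in range(300):
    # upwind fluxes at the N+1 cell interfaces; nothing flows in at the left end
    J = [0.0] + [v*c[i] for i in range(N)]
    c = [c[i] + dt*(-(J[i+1] - J[i])/dx + sigma) for i in range(N)]
    outflow += dt*J[N]
    created += dt*sigma*1.0          # sigma integrated over the interval of length 1
mass1 = sum(c)*dx

# discrete mass balance: change in mass = creation - net outflow
print(abs(mass1 - (mass0 - outflow + created)) < 1e-9)  # -> True
```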
Until now, everything was quite abstract. Now we specialize to very different types of fluxes.
9.4
Transport Equation
We start with the simplest type of equation, the transport (also known as the convection or the
advection equation55 ).
55
In meteorology, convection and advection refer respectively to vertical and horizontal motion; the Latin origin is
advectio = act of bringing.
We consider here flux that is due to transport: a conveyor belt, as in an airport luggage pick-up; wind carrying particles; water carrying a dissolved substance; etc.
The main observation is that, in this case:

  flux = concentration × velocity

(both depending, in general, on local conditions: x and t).
The following picture may help in understanding why this is true.

[Figure: particles at a density of 5 per unit length are carried to the right, through a unit cross-sectional area, at a constant speed v = 3 units/sec.]

Imagine a counter that clicks when each particle passes by the right endpoint. The total flux in one second is 15 units. In other words, it equals c·v. This will probably convince you of the following formula:

  J(x, t) = c(x, t) v(x, t)
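The counter thought-experiment is easy to reproduce with random particles (a sketch; the density, speed, and observation time are arbitrary choices):

```python
import random

random.seed(0)
c, v, L, T = 5.0, 3.0, 100.0, 10.0       # density 5 per unit length, speed 3
n = int(c*L)                             # 500 particles scattered on [-L, 0)
particles = [-L*random.random() for _ in range(n)]
crossings = sum(1 for x in particles if x + v*T > 0)   # crossed x = 0 within T seconds
print("flux ~", crossings / T, "particles/sec; c*v =", c*v)
```

The measured flux fluctuates around c·v = 15; with more particles the agreement tightens.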
Since ∂c/∂t = −∂J/∂x + σ, we conclude:

  ∂c/∂t = −∂(cv)/∂x + σ

or, equivalently:

  ∂c/∂t + ∂(cv)/∂x = σ

or, equivalently, in any dimension:

  ∂c/∂t + div(cv) = σ .

This equation describes collective behavior, that of individual particles just going with the flow.
Later, we will consider additional (and more interesting!) particle behavior, such as random movement, or movement in the direction of food. Typically, many such effects will be superimposed in the formula for J.

A special case is that of a constant velocity v(x, t) ≡ v. For constant velocities, the above simplifies to:

  ∂c/∂t = −v ∂c/∂x + σ

or, equivalently:

  ∂c/∂t + v ∂c/∂x = σ

or, equivalently, in any dimension (with v a constant vector):

  ∂c/∂t = −v · ∇c + σ ,   i.e.   ∂c/∂t + v · ∇c = σ .
9.5
Let us take the even more special case in which the reaction is linear: σ = λc. This corresponds to a decay or growth that is proportional to the population (at a given time and place). The equation is:

  ∂c/∂t + v ∂c/∂x = λc

(λ > 0 growth, λ < 0 decay).

Theorem: Every solution (in dimension 1) of the above equation is of the form:

  c(x, t) = e^{λt} f(x − vt)

for some (unspecified) differentiable single-variable function f.
Conversely, e^{λt} f(x − vt) is a solution, for any λ and f.
Notice that, in particular, when t = 0, we have c(x, 0) = f(x). Therefore, the function f plays the role of an initial condition in time (but which depends, generally, on space).

The last part of the theorem is very easy to prove, as we only need to verify the PDE:

  λ e^{λt} f(x − vt) − v e^{λt} f′(x − vt) + v e^{λt} f′(x − vt) = λ e^{λt} f(x − vt) .

Proving that the only solutions are these is a little more work: we must prove that every solution of ∂c/∂t + v ∂c/∂x = λc (where v and λ are given real constants) must have the form c(x, t) = e^{λt} f(x − vt), for some appropriate f.

We start with the very special case v = 0. In this case, for each fixed x, we have an ODE: ∂c/∂t = λc. Clearly, for each x, this ODE has the unique solution c(x, t) = e^{λt} c(x, 0), so we can take f(x) to be the function c(x, 0).
The key step is to reduce the general case to this one, by "traveling along the solution."

Formally, given a solution c(x, t), we introduce a new variable z = x − vt, so that x = z + vt, and we define the auxiliary function φ(z, t) := c(z + vt, t).

We note that, by the chain rule,

  ∂φ/∂z (z, t) = ∂c/∂x (z + vt, t)

and

  ∂φ/∂t (z, t) = v ∂c/∂x (z + vt, t) + ∂c/∂t (z + vt, t) .

We now use the PDE, written as v ∂c/∂x = λc − ∂c/∂t, to get:

  ∂φ/∂t (z, t) = λc − ∂c/∂t + ∂c/∂t = λ c(z + vt, t) = λ φ(z, t) .

We have thus reduced to the case v = 0, for φ! So, φ(z, t) = e^{λt} φ(z, 0). Therefore, substituting back:

  c(x, t) = φ(x − vt, t) = e^{λt} φ(x − vt, 0) .

We conclude that

  c(x, t) = e^{λt} f(x − vt)

as claimed (writing f(z) := φ(z, 0)).
Thus, all solutions are traveling waves, with decay or growth depending on the sign of λ.
These are typical figures, assuming that v = 3 and that λ = 0 and λ < 0 respectively (snapshots taken at t = 0, 1, 2):
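The theorem is also easy to corroborate numerically for a sample profile: the PDE residual of e^{λt} f(x − vt) vanishes up to finite-difference error (f, λ, v, and the grid below are arbitrary choices):

```python
import math

lam, v = -0.5, 3.0
f = lambda z: math.exp(-z*z)                 # sample initial profile (a Gaussian bump)
c = lambda x, t: math.exp(lam*t)*f(x - v*t)  # the claimed solution

h = 1e-5
worst = 0.0
for i in range(9):
    for j in range(11):
        x, t = -2.0 + 0.5*i, 0.1*j
        c_t = (c(x, t + h) - c(x, t - h))/(2*h)   # central differences
        c_x = (c(x + h, t) - c(x - h, t))/(2*h)
        worst = max(worst, abs(c_t + v*c_x - lam*c(x, t)))
print(worst < 1e-6)  # residual of c_t + v c_x - lam*c  -> True
```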
To determine c(x, t) = e^{λt} f(x − vt) uniquely, we need to know the initial condition f.
This could be specified in various ways, for instance by giving an initial distribution c(x, 0), or by giving the values c(x0, t) at some fixed point x0.

Example: a nuclear plant is leaking radioactivity, and we measure a certain type of radioactive particle with a detector placed at x = 0. Let us assume that the signal detected is described by the following function:

  h(t) = 0 for t < 0 ,   h(t) = 1/(1 + t) for t ≥ 0 ,

the wind blows eastward with constant velocity v = 2 m/s, and particles decay at rate 3 s⁻¹ (λ = −3). What is the solution c(x, t)?

We know that the solution is c(x, t) = e^{−3t} f(x − 2t), but what is f?

We need to find f. Let us write the dummy-variable argument of f as z, so as not to get confused with x and t. So we look for a formula for f(z). After we have found f(z), we will substitute z = x − 2t.

Since at position x = 0 we have c(0, t) = h(t), we know that h(t) = c(0, t) = e^{−3t} f(−2t), which is to say, f(−2t) = e^{3t} h(t).

We wanted f(z), so we substitute z = −2t, and then obtain (since t = −z/2):

  f(z) = e^{−3z/2} h(−z/2) .

To be more explicit, let us substitute the definition of h. Note that t ≥ 0 is the same as z ≤ 0. Therefore, we have:

  f(z) = e^{−3z/2} / (1 − z/2)  for z ≤ 0 ,   and   f(z) = 0  for z > 0 .

Finally, we conclude that the solution is:

  c(x, t) = e^{−3x/2} / (1 + t − x/2)  for t ≥ x/2 ,   and   c(x, t) = 0  for t < x/2 ,

where we used the following facts: z = x − 2t ≤ 0 is equivalent to t ≥ x/2; e^{−3t} e^{−(3/2)(x−2t)} = e^{−3x/2}; and 1 − (x − 2t)/2 = 1 + t − x/2.

We can now answer more questions. For instance: what is the concentration at position x = 10 and time t = 6? The answer is

  c(10, 6) = e^{−15} / 2 .
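The worked example above can be coded up and checked end to end (a sketch; the test points are arbitrary):

```python
import math

def h(t):                       # the signal measured at x = 0
    return 0.0 if t < 0 else 1/(1 + t)

def c(x, t, v=2.0, lam=-3.0):   # c(x,t) = exp(lam t) f(x - v t), with f obtained from h
    z = x - v*t
    if z > 0:                   # the front, travelling at speed v, has not reached x yet
        return 0.0
    return math.exp(lam*x/v)/(1 + t - x/v)

# the formula reproduces the detector data at x = 0 ...
assert all(abs(c(0.0, t) - h(t)) < 1e-12 for t in (0.0, 0.5, 1.0, 7.3))
# ... and the value computed in the text:
print(c(10.0, 6.0) == math.exp(-15)/2)  # -> True
```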
Homework Problems
1. Suppose c(x, t) is the density of a bacterial population being carried east by a wind blowing at 4 mph. The bacteria reproduce exponentially, with a doubling time of 5 hours.
(a) Find the density c(x, t) in each of these cases:
(1) c(x, 0) ≡ 1 … (5) c(0, t) ≡ 1 … c(x, 0) = 1/(1 + x²) …
2. Prove the following analog of the theorem in dimension 3 (the constant velocity v = (v1, v2, v3) is now a vector): every solution is of the form

  c(x, y, z, t) = e^{λt} f(x − v1 t, y − v2 t, z − v3 t) .

(Hint: use φ(x, y, z, t) = c(x + v1 t, y + v2 t, z + v3 t, t).)
9.6
Attraction, Chemotaxis
Chemotaxis is the term used to describe movement in response to chemoattractants or repellants, such
as nutrients and poisons, respectively.
Perhaps the best-studied example of chemotaxis involves E. coli bacteria. In this course we will not
study the behavior of individual bacteria, but will concentrate instead on the evolution equation for
population density. However, it is worth digressing on the topic of individual bacteria, since it is so
fascinating.
A Digression
E. coli bacteria are single-celled organisms, about 2 μm long, which possess up to six flagella for
movement.
Chemotaxis in E. coli has been studied extensively. These bacteria can move in basically two modes:
a tumble mode in which flagella turn clockwise and reorientation occurs, or a run mode in which
flagella turn counterclockwise, forming a bundle which helps propel them forward.
Basically, when the cell senses a change in nutrient in a certain direction, it runs in that direction.
When the sensed change is very low, a tumble mode is entered, with random reorientations, until
a new direction is decided upon. One may view the bacterium as performing a stochastic gradient
search in a nutrient-potential landscape. These are pictures of runs and tumbles performed by E.
coli:
The runs are biased, drifting about 30 deg/s due to viscous drag and asymmetry. There is very little
inertia (very low Reynolds number). The mean run interval is about 1 second and the mean tumble
interval is about 1/10 sec.
The motors actuating the flagella are made up of several proteins. In the terms used by Harvard's
Howard Berg, they constitute "a nanotechnologist's dream," consisting as they do of "engines, propellers, . . . , particle counters, rate meters, [and] gear boxes." These are an actual electron micrograph
and a schematic diagram of the flagellar motor:
The signaling pathways involved in E. coli chemotaxis are fairly well understood. Aspartate or other
nutrients bind to receptors, reducing the rate at which a protein called CheA (Che for chemotaxis)
phosphorylates another protein called CheY, transforming it into CheY-P. A third protein, called CheZ,
continuously reverses this phosphorylation; thus, when ligand is present, there is less CheY-P and
more CheY.
more CheY. Normally, CheY-P binds to the base of the motor, helping clockwise movement and hence
tumbling, so the lower concentration of CheY-P has the effect of less tumbling and more running
(presumably, in the direction of the nutrient).
A separate feedback loop, which includes two other proteins, CheR and CheB, causes adaptation to
constant nutrient concentrations, resulting in a resumption of tumbling and consequent re-orientation.
In effect, the bacterium is able to take derivatives, as it were, and decide which way to go.
There are many papers (ask instructor for references if interested) describing biochemical models of
how these proteins interact and mathematically analyzing the dynamics of the system.
Modeling how Densities Change due to Chemotaxis
Let us suppose given a function V = V (x) which denotes the concentration of a food source or
chemical (or friends, or foes), at location x.
We think of V as a potential function, very much as with an electromagnetic or force field in physics.
The basic principle that we wish to model is: the population is attracted toward places where V is
larger.
We often assume that either V (x) ≥ 0 for all x or V (x) ≤ 0 for all x.
We use the positive case to model attraction towards a nutrient.
If V has negative values, then movement towards larger values of V means movement away from
places where V is large in absolute value, that is to say, repulsion from such values, which might
represent the locations of high concentrations of poisons or predator populations.
To be more precise: we will assume that individuals (in the population of which c(x, t) measures the
density) move at any given time in the direction in which V (x) increases the fastest when taking a
small step, and with a velocity that is proportional⁵⁸ to the perceived rate of change in magnitude of
V.
We recall from multivariate calculus that the increment V (x + Δx) − V (x) is maximized, over small steps of a fixed length, when the step points in the direction of the gradient ∇V (x).
The proof is as follows. We need to find a direction, i.e., a unit vector u, so that V (x + hu) − V (x)
is maximized, for any small stepsize h.
We take a linearization (Taylor expansion) for h > 0 small:
V (x + hu) − V (x) = [∇V (x) · u] h + o(h) .
This implies the following formula for the average change in V when taking a small step:
(1/h) ΔV = ∇V (x) · u + o(1) ≈ ∇V (x) · u
and therefore the maximum value is obtained precisely when the vector u is picked in the same
direction as ∇V (x). Thus, the direction of movement is given by the gradient of V .
The magnitude of the vector (1/h)ΔV is then approximately |∇V (x)|. Thus, our assumptions give us that
chemotaxis results in a velocity α∇V (x), proportional to ∇V (x).
Since, in general, flux = density × velocity, we conclude:
J(x, t) = α c(x, t) ∇V (x)
for some constant α > 0, so that the obtained equation (ignoring reaction or transport effects) is:
∂c/∂t = −div (α c ∇V )
or, equivalently:
∂c/∂t + div (α c ∇V ) = 0
or, equivalently, in dimension one:
∂c/∂t + ∂(α c V ′)/∂x = 0 .
⁵⁸This is not always reasonable! Some other choices are: there is a maximum speed at which one can move, or
movement is only possible at a fixed speed. See the homework problem.
Homework problem: Give an example of an equation that would model this situation: the speed of
movement is an increasing function of the norm of the gradient, but is bounded by some maximal
possible speed.
Of course, one can superimpose not only reactions but also different effects, such as transport, to this
basic equation; the fluxes due to each effect add up to a total flux.
Example
Air flows (on a plane) Northward at 3 m/s, carrying bacteria. There is a food source⁵⁹ as well, placed at
x = 1, y = 0, which attracts according to the following potential:
V (x, y) = 1 / ((x − 1)² + y² + 1) .
Its partial derivatives are
∂V/∂x = −2(x − 1) / ((x − 1)² + y² + 1)²   and   ∂V/∂y = −2y / ((x − 1)² + y² + 1)² .
Taking the chemotaxis proportionality constant to be α = 2 (the value consistent with the coefficients below), the equation ∂c/∂t + 3 ∂c/∂y + div (2c∇V ) = 0 expands to:
∂c/∂t + 3 ∂c/∂y − (4(x − 1)/N²) ∂c/∂x − (4y/N²) ∂c/∂y + (8/N² − 16/N³) c = 0
where we wrote N = (x − 1)² + y² + 1.
Here is a homework problem:
Problem che1
We are given this chemotaxis equation (one space dimension) for the concentration of a microorganism (assuming no additional reactions, transport, etc.):
∂c/∂t = ((2x − 6)/(2 + (x − 3)²)²) ∂c/∂x + 2c ( 1/(2 + (x − 3)²)² − (2x − 6)²/(2 + (x − 3)²)³ ) .
(1) What is the potential function? (Give a formula for it.)
(2) Where (at x =?) is the largest amount of food?
(Answers on website.)
⁵⁹We assume that the food is not being carried by the wind, but stays fixed. (How would you model a situation where
the food is also being carried by the wind?) Also, this model assumes that the amount of food is large enough that we
need not worry about its decrease due to consumption by the bacteria. (How would you model food consumption?)
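As a sanity check on Problem che1, one can verify numerically that the right-hand side above equals −∂(cV ′)/∂x for the candidate potential V (x) = 1/(2 + (x − 3)²) with α = 1 (both assumptions, consistent with the answers), using an arbitrary smooth test density c:

```python
import math

def V(x):
    # assumed potential for Problem che1
    return 1.0 / (2.0 + (x - 3.0) ** 2)

def Vp(x):
    # analytic derivative: V'(x) = -(2x - 6) / (2 + (x-3)^2)^2
    return -(2 * x - 6) / (2.0 + (x - 3.0) ** 2) ** 2

def c(x):
    # arbitrary smooth test density
    return math.exp(-((x - 2.0) ** 2))

def cp(x):
    return -2 * (x - 2.0) * c(x)

def rhs(x):
    # expanded right-hand side of the che1 equation
    D = 2.0 + (x - 3.0) ** 2
    return (2 * x - 6) / D ** 2 * cp(x) + 2 * c(x) * (1 / D ** 2 - (2 * x - 6) ** 2 / D ** 3)

def chemo(x, h=1e-5):
    # -d/dx (c V') by a central finite difference
    g = lambda y: c(y) * Vp(y)
    return -(g(x + h) - g(x - h)) / (2 * h)

max_err = max(abs(rhs(x) - chemo(x)) for x in (0.5, 1.7, 3.0, 4.2))
print(max_err < 1e-6)
```

The two expressions agree to finite-difference accuracy, which is exactly what parts (1) and (2) of the problem are probing.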
99
Some Intuition
Let us develop some intuition regarding the chemotaxis equation, at least in dimension one (taking,
for simplicity, the proportionality constant α = 1, so that ∂c/∂t = −∂(cV ′)/∂x).
Suppose that we study what happens at a critical point of V . That is, we take a point x₀ for which
V ′(x₀) = 0. Suppose, further, that the concavity of V at that point is down: V ″(x₀) < 0. Then
∂c/∂t (x₀, t) > 0, because:
∂c/∂t (x₀, t) = −(∂c/∂x)(x₀, t) V ′(x₀) − cV ″(x₀) = 0 − cV ″(x₀) > 0 .
In other words, the concentration at such a point increases in time. Why is this so, intuitively?
Answer: the conditions V ′(x₀) = 0, V ″(x₀) < 0 characterize a local maximum of V . Therefore,
nearby particles (bacteria, whatever it is that we are studying) will move toward this point x₀, and the
concentration there will increase in time.
Conversely, if V ′(x₀) = 0 and V ″(x₀) > 0, then the formula shows that ∂c/∂t (x₀, t) < 0, that is to say, the density
decreases. To understand this intuitively, we can think as follows.
The point x₀ is a local minimum of V . Particles that start exactly at this point would not move, but any
nearby particles will move uphill, towards food. Thus, as nearby particles move away, the density at
x₀, which is an average over small segments around x₀, indeed goes down.
Next, let us analyze what happens when V ′(x₀) > 0 and V ″(x₀) > 0, under the additional assumption
that (∂c/∂x)(x₀, t) ≈ 0, that is, we assume that the density c(x, t) is approximately constant around x₀.
Then
∂c/∂t (x₀, t) = −(∂c/∂x)(x₀, t) V ′(x₀) − cV ″(x₀) ≈ −cV ″(x₀) < 0 .
How can we interpret this inequality?
This picture of what the graph of V around x0 looks like should help:
The derivative (gradient) of V is smaller to the left of x₀ than to the right of x₀, because V ″ > 0 means
that V ′ is increasing. So, the flux is smaller to the left of x₀ than to its right. This means that particles to
the left of x₀ arrive in the region around x₀ more slowly than particles leave this region in
the rightward direction. So the density at x₀ diminishes.
Homework: analyze, in an analogous manner:
(a) V 0 (x0 ) > 0, V 00 (x0 ) < 0
(b) V 0 (x0 ) < 0, V 00 (x0 ) > 0.
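A quick numerical experiment supports this intuition. The sketch below (an explicit finite-difference scheme for ∂c/∂t = −∂(cV ′)/∂x, with α = 1 and the illustrative potential V (x) = e^{−(x−5)²/2}; all of these choices are assumptions made for the example) starts from a uniform density and shows it growing at the maximum of V :

```python
import math

def Vp(x):
    # derivative of the illustrative potential V(x) = exp(-(x-5)^2 / 2)
    return -(x - 5.0) * math.exp(-((x - 5.0) ** 2) / 2.0)

L, nx = 10.0, 201
dx = L / (nx - 1)
xs = [i * dx for i in range(nx)]

c = [1.0] * nx           # uniform initial density
dt, steps = 1e-3, 500    # short run; the explicit scheme is only a sketch
for _ in range(steps):
    flux = [c[i] * Vp(xs[i]) for i in range(nx)]
    new = c[:]           # endpoints kept fixed (V' is negligible there)
    for i in range(1, nx - 1):
        new[i] = c[i] - dt * (flux[i + 1] - flux[i - 1]) / (2 * dx)
    c = new

mid = nx // 2            # grid point x = 5, the maximum of V
print(c[mid] > 1.1)      # density has grown at the local maximum of V
```

Away from the peak, where V ″ > 0, the density dips slightly below 1, exactly as the sign analysis above predicts.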
10
Diffusion
Diffusion is one of the fundamental processes by which particles (atoms, molecules, even bigger
objects) move.
Fick's Law, proposed in 1855, and based upon experimental observations, postulated that diffusion is
due to movement from higher to lower concentration regions. Mathematically:
J(x, t) ∝ −∇c(x, t)
(we use ∝ for "proportional to").
This formula applies to movement of particles in a solution, where the proportionality constant will
depend on the sizes of the molecules involved (solvent and solute) as well as temperature. It also
applies in many other situations, such as for instance diffusion across membranes, in which case the
constant depends on permeability and thickness as well.
The main physical explanation of diffusion is probabilistic, based on the thermal motion of individual particles due to the environment (e.g., molecules of solvent) constantly kicking the particles.
Brownian motion, named after the botanist Robert Brown, refers to such random thermal motion.
One often finds the claim that Brown, in his 1828 paper, observed that pollen grains suspended
in water move "in a rapid but very irregular fashion."
However, in Nature's 10 March 2005 issue (see also errata in the 24 March issue), David
Wilkinson states: ". . . several authors repeat the mistaken idea that the botanist Robert Brown
observed the motion that now carries his name while watching the irregular motion of pollen
grains in water. The microscopic particles involved in the characteristic jiggling dance Brown
described were much smaller particles. I have regularly studied pollen grains in water suspension
under a microscope without ever observing Brownian motion.
From the title of Brown's 1828 paper "A Brief Account of Microscopical Observations ... on
the Particles contained in the Pollen of Plants...", it is clear that he knew he was looking at smaller
particles (which he estimated at about 1/500 of an inch in diameter) than the pollen grains.
Having observed vivid motion in these particles, he next wondered if they were alive, as they
had come from a living plant. So he looked at particles from pollen collected from old herbarium
sheets (and so presumably dead) but also found the motion. He then looked at powdered fossil
plant material and finally inanimate material, which all showed similar motion.
Brown's observations convinced him that life was not necessary for the movement of these
microscopic particles.
The relation to Fick's Law was explained mathematically in Einstein's Ph.D. thesis (1905).⁶⁰
When diffusion acts, and if there are no additional constraints, the eventual result is a homogeneous
concentration over space. However, usually there are additional boundary conditions, creation and
absorption rates, etc., which are superimposed on pure diffusion. This results in a trade-off between
the smoothing out effects of diffusion and other influences, and the results can be very interesting.
We should also remark that diffusion is often used to model macroscopic situations analogous to
movement of particles from high to low density regions. For example, a human population may shift
towards areas with less density of population, because there is more free land to cultivate.
⁶⁰A course project asks you to run a java applet simulation of Einstein's description of Brownian motion.
We have that J(x, t) = −D ∇c(x, t), for some constant D called the diffusion coefficient. Since, in
general, ∂c/∂t = −div J, we conclude that:
∂c/∂t = D ∇²c
where ∇² is the Laplacian (often written Δ) operator:
∂c/∂t = D ( ∂²c/∂x₁² + ∂²c/∂x₂² + ∂²c/∂x₃² ) .
The notation ∇² originates as follows: the divergence can be thought of as a dot product with the operator ∇. So
∇ · (∇c) is written as ∇²c. This is the same as the heat equation in physics (which studies
diffusion of heat).
Note that the equation is just:
∂c/∂t = D ∂²c/∂x²
in dimension one.
Let us consider the following very sketchy probabilistic intuition to justify why it is reasonable that the
flux should be proportional to the gradient of the concentration, if particles move at random. Consider
two adjacent boxes, centered at x and x + Δx, containing p₁ and p₂ particles, respectively.
We assume that, in some small interval of time Δt, particles jump right or left with equal probabilities,
so half of the p₁ particles in the first box move right, and the other half move left. Similarly for the p₂
particles in the second box. (We assume that the jumps are big enough that particles exit the box in
which they started.)
The net number of particles (counting rightward as positive) through the segment between the two boxes
is p₁/2 − p₂/2, which is proportional roughly to c(x, t) − c(x + Δx, t). This last difference,
in turn, is proportional to −∂c/∂x.
This argument is not really correct, because we have said nothing about the velocity of the particles
and how it relates to the scales of space and time. But it does help in seeing, intuitively, why the flux
is proportional to the negative of the gradient of c.
A game can help in understanding this. Suppose that students in a classroom all initially sit in the front rows, but
then start to randomly (and repeatedly) change chairs, flipping coins to decide whether to move backward (or
forward, if they had already moved back). Since no one is sitting in the back, initially there is a net flux
towards the back. Even after a while, there will still be fewer students flipping coins in the back than in
the front, so there are more possibilities of students moving backward than forward. Eventually, once
the students are well-distributed, about the same number will move forward as move backward:
this is the equalizing effect of diffusion.
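The classroom game is easy to simulate. In this sketch (the row count, student count, and step count are arbitrary choices), everyone starts in the front two rows of a ten-row room and repeatedly flips a coin to move one row back or forward; eventually about half the students end up in the back half of the room:

```python
import random

random.seed(1)
rows, students, steps = 10, 2000, 400
pos = [random.randint(0, 1) for _ in range(students)]  # all start in rows 0-1

for _ in range(steps):
    for i in range(students):
        move = random.choice((-1, 1))
        # reflecting walls: stay put if the move would leave the room
        if 0 <= pos[i] + move < rows:
            pos[i] += move

frac_back = sum(p >= rows // 2 for p in pos) / students
print(frac_back)  # close to 0.5 once the distribution has equalized
```

The stationary distribution of this walk is uniform over the rows, which is the discrete analogue of diffusion smoothing out the density.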
10.1
It is often said that diffusion results in movement proportional to √t. The following theorem gives
one way to make that statement precise. A different interpretation is in the next section, and later, we
will discuss a probabilistic interpretation and relations to random walks as well.
Theorem. Suppose that c satisfies the diffusion equation
∂c/∂t = D ∂²c/∂x² .
Assume also that the total amount
C = ∫_{−∞}^{+∞} c(x, t) dx
is finite and constant in time, and that c and ∂c/∂x vanish fast enough as x → ±∞ that the boundary
terms in the integrations by parts below vanish. Define the mean-square displacement
σ²(t) = (1/C) ∫_{−∞}^{+∞} x² c(x, t) dx .
Then σ²(t) = 2Dt + σ²(0).
To prove this, we differentiate under the integral sign and use the equation:
C (dσ²/dt)(t) = ∫_{−∞}^{+∞} x² (∂c/∂t) dx = D ∫_{−∞}^{+∞} x² (∂²c/∂x²) dx .
Integrating by parts twice,
∫_{−∞}^{+∞} x² (∂²c/∂x²) dx = [x² ∂c/∂x]_{−∞}^{+∞} − ∫_{−∞}^{+∞} 2x (∂c/∂x) dx
= −[2xc]_{−∞}^{+∞} + ∫_{−∞}^{+∞} 2c dx = 2 ∫_{−∞}^{+∞} c(x, t) dx = 2C .
Canceling C, we obtain:
(dσ²/dt)(t) = 2D
and hence, integrating over t, we have, as wanted:
σ²(t) = 2Dt + σ²(0) .
If, in particular, particles start concentrated in a small interval around x = 0, so that c(x, 0) = 0
for all |x| > ε, then (with c = c(x, 0)):
∫_{−∞}^{+∞} x² c dx = ∫_{−ε}^{+ε} x² c dx ≤ ε² ∫_{−ε}^{+ε} c dx = ε² C
so σ²(0) ≈ 0.
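We can check the theorem against the Gaussian solution c₀(x, t) = e^{−x²/(4Dt)}/√(4πDt) (this is the solution appearing in a homework problem below, normalized to total mass C = 1), computing the second moment by numerical quadrature; the grid size and cutoff are ad-hoc choices:

```python
import math

D = 0.7

def c0(x, t):
    # Gaussian solution of the diffusion equation, normalized to total mass 1
    return math.exp(-x * x / (4 * D * t)) / math.sqrt(4 * math.pi * D * t)

def sigma2(t, cutoff=40.0, n=40001):
    # second moment: integral of x^2 c(x,t) dx (here C = 1), by a Riemann sum
    h = 2 * cutoff / (n - 1)
    return sum((-cutoff + i * h) ** 2 * c0(-cutoff + i * h, t) * h for i in range(n))

checks = [abs(sigma2(t) - 2 * D * t) < 1e-6 for t in (0.5, 1.0, 2.0)]
print(all(checks))
```

As predicted, the computed second moment matches 2Dt to quadrature accuracy for every sampled time.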
10.2
There are many ways to state precisely what is meant by saying that diffusion takes time of order r² to move
a distance r. As diffusion is basically a model of a population of individuals which move randomly,
one cannot talk about any particular particle, bacterium, etc. One must make a statement about the
whole population. One explanation is in terms of the second moment of the density c, as done earlier.
Another one is probabilistic, and one could also argue in terms of the Gaussian fundamental solution.
We sketch another one next.
Suppose that we consider the diffusion equation ∂c/∂t = D ∂²c/∂x² for x ∈ ℝ, and an initial condition at
t = 0 which is a step function: a uniform population density of one on the interval (−∞, 0] and zero
for x > 0. It is quite intuitively clear that diffusion will result in population densities that look like
the two subsequent figures, eventually converging to a constant value of 0.5.
Consider, for any given coordinate point p > 0, the time T = T (p) for which it is true that (let us say)
c(p, T ) = 0.1. It is intuitively clear (we will not prove it) that the function T (p) is increasing in p:
for those points p that are farther to the right, it will take longer for the graph to rise enough. So, T (p)
is uniquely defined for any given p. We sketch now a proof of the fact that T (p) is proportional to p².
Suppose that c(x, t) is a solution of the diffusion equation, and, for any given positive constant r,
introduce the new function f defined by:
f (x, t) = c(rx, r²t) .
Observe (chain rule) that ∂f/∂t = r² (∂c/∂t) and ∂²f/∂x² = r² (∂²c/∂x²) (the derivatives of c being
evaluated at (rx, r²t)). Therefore,
∂f/∂t − D ∂²f/∂x² = r² ( ∂c/∂t − D ∂²c/∂x² ) = 0 .
In other words, the function f also satisfies the same equation. Moreover, c and f have the same
initial condition: f (x, 0) = c(rx, 0) = 1 for x ≤ 0 and f (x, 0) = c(rx, 0) = 0 for x > 0. Therefore
f and c must be the same function.⁶¹ In summary, for every positive number r, the following scaling
law is true:
c(x, t) = c(rx, r²t)   for all x, t .
For any p > 0, if we plug in r = p, x = 1, and t = T (p)/p² in the above formula, we obtain:
c(1, T (p)/p²) = c(p · 1, p² · (T (p)/p²)) = c(p, T (p)) = 0.1 ,
and therefore T (1) = T (p)/p², that is, T (p) = kp², with constant k = T (1).
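For this step initial condition the explicit solution is the complementary error function profile c(x, t) = ½ erfc(x/√(4Dt)), and the scaling law T (p) = T (1) p² can be confirmed numerically (D = 1 here; the bisection bracket is an ad-hoc choice):

```python
import math

D = 1.0

def c(x, t):
    # solution of the diffusion equation with step initial data (1 for x <= 0)
    return 0.5 * math.erfc(x / math.sqrt(4 * D * t))

def T(p, level=0.1):
    # the unique time with c(p, T) = level, found by bisection;
    # c(p, t) increases in t from 0 toward 0.5, so the root is unique
    lo, hi = 1e-9, 1e4
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if c(p, mid) < level:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

ratios = [T(p) / (T(1.0) * p * p) for p in (2.0, 3.0, 5.0)]
print(all(abs(r - 1) < 1e-4 for r in ratios))
```

Every ratio T (p)/(T (1)p²) comes out equal to 1 to solver accuracy, which is the quadratic space-time scaling of diffusion.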
Some Homework Problems
(1) In dimension 2, compute the Laplacian in polar coordinates. That is, write
f (r, θ, t) = c(r cos θ, r sin θ, t) ,
⁶¹Of course, uniqueness of solutions requires a proof. The fact that f and c satisfy the same boundary conditions at
infinity is used in such a proof, which we omit here.
so that f is really the same as the function c, but thought of as a function of magnitude, argument, and
time. Prove that:
(∇²c)(r cos θ, r sin θ, t) = ∂²f/∂r² + (1/r) ∂f/∂r + (1/r²) ∂²f/∂θ²
(all terms on the RHS evaluated at (r, θ, t)). Writing f just as c (but remembering that c is now viewed
as a function of (r, θ, t)), this means that the diffusion equation in polar coordinates is:
∂c/∂t = D ( ∂²c/∂r² + (1/r) ∂c/∂r + (1/r²) ∂²c/∂θ² ) .
Conclude that, for radially symmetric c, the diffusion equation in polar coordinates is:
∂c/∂t = (D/r) ∂/∂r ( r ∂c/∂r ) .
It is also possible to prove that, for spherically symmetric c in three dimensions, the Laplacian is
(1/r²) ∂/∂r ( r² ∂c/∂r ) .
2. (harder and optional) Show that, under analogous conditions to those in the theorem shown for
dimension 1, in dimension d (e.g., d = 2, 3) one has the formula:
σ²(t) = 2dDt + σ²(0)
(for d = 1, this is the same as previously). The proof will be completely analogous, except that the
first step in the integration by parts (uv′ = (uv)′ − u′v, which is just the Leibniz rule for derivatives) must
be generalized to vectors (use that ∇ acts like a derivative) and the second step (the Fundamental
Theorem of Calculus) should be replaced by an application of Gauss' divergence theorem.
3. Prove that (for n = 1) the following function is a particular solution of the diffusion equation:
c₀(x, t) = (C/√(4πDt)) e^{−x²/(4Dt)}
(where C is any constant). Also, verify that, indeed, for this example, σ²(t) = 2Dt.
In dimension n = 3 (or even any other dimension), there is a similar formula. If you have access to
Maple or Mathematica, check that the following function is a solution, for t > 0:
c₀(x, t) = (C/(4πDt)^{3/2}) e^{−r²/(4Dt)}
where r² = x² + y² + z².
Show, also, that for any (reasonable) function f , the function⁶²
c(x, t) = (1/√(4πDt)) ∫_{−∞}^{+∞} e^{−(x−ξ)²/(4Dt)} f (ξ) dξ
solves the diffusion equation for t > 0, and has the initial condition c(x, 0) = f (x).
⁶²This is the convolution c₀ ∗ f of f with the Green's function c₀ for the PDE.
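Lacking Maple or Mathematica, one can at least check the one-dimensional Gaussian numerically, comparing ∂c₀/∂t with D ∂²c₀/∂x² by central finite differences (the values of D, C, and the sample points are arbitrary choices):

```python
import math

D, C = 0.5, 2.0

def c0(x, t):
    # candidate solution C * exp(-x^2/(4Dt)) / sqrt(4*pi*D*t)
    return C * math.exp(-x * x / (4 * D * t)) / math.sqrt(4 * math.pi * D * t)

def residual(x, t, h=1e-4):
    # c_t - D c_xx, approximated by central differences
    ct = (c0(x, t + h) - c0(x, t - h)) / (2 * h)
    cxx = (c0(x + h, t) - 2 * c0(x, t) + c0(x - h, t)) / (h * h)
    return ct - D * cxx

worst = max(abs(residual(x, 1.0)) for x in (-2.0, -0.5, 0.0, 1.0, 3.0))
print(worst < 1e-6)
```

The residual vanishes to finite-difference accuracy at every sample point, as the problem asks you to prove analytically.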
10.3
Separation of Variables
Consider again the diffusion equation ∂c/∂t = D ∂²c/∂x², and let us look for solutions of the
separated form c(x, t) = X(x)T (t). Substituting into the equation gives X(x)T ′(t) = D X″(x)T (t),
that is, dividing by D X(x)T (t):
X″(x)/X(x) = T ′(t)/(D T (t))   for all x, t .
Now define:
λ := T ′(0)/(D T (0))
so:
X″(x)/X(x) = T ′(0)/(D T (0)) = λ
for all x (since the above equality holds, in particular, at t = 0). Thus, we conclude, applying the
equality yet again:
X″(x)/X(x) = T ′(t)/(D T (t)) = λ   for all x, t .
(When λ < 0, we will write λ = −k², with k = √(−λ).)
10.4
Suppose that a set of particles undergo diffusion (e.g., bacteria performing a purely random motion) inside
a thin tube.
The tube is open at both ends, so part of the population is constantly being lost (the density of the
organisms outside the tube is small enough that we may take it to be zero).
We model the tube in dimension 1, along the x axis, with endpoints at x = 0 and x = π. (Picture:
particles inside the tube, with concentration c ≈ 0 just outside the two open ends.)
We model the problem by a diffusion (for simplicity, we again take D = 1) with boundary conditions:
∂c/∂t = ∂²c/∂x² ,   c(0, t) = c(π, t) = 0 .
Note that c identically zero is always a solution. Let's look for a bounded and nonzero solution.
Solution: we look for a c(x, t) of the form X(x)T (t). As we saw, if there is such a solution, then
there is a number λ so that X″(x) = λX(x) and T ′(t) = λT (t) for all x, t, so, in particular,
T (t) = e^{λt} T (0). Since we were asked to obtain a bounded solution, the only possibility is λ ≤ 0
(otherwise, T (t) → ∞ as t → ∞).
It cannot be that λ = 0. Indeed, if that were to be the case, then X″(x) = 0 implies that X is a line:
X(x) = ax + b. But then, the boundary conditions X(0)T (t) = 0 and X(π)T (t) = 0 for all t imply
that ax + b = 0 at x = 0 and x = π, giving a = b = 0, so X ≡ 0, but we are looking for a nonzero
solution.
We write λ = −k², for some k > 0, and consider the general form of the solution of X″ = −k²X:
X(x) = a sin kx + b cos kx .
The boundary condition at x = 0 can be used to obtain more information:
X(0)T (t) = 0 for all t  ⟹  X(0) = 0  ⟹  b = 0 .
Therefore, X(x) = a sin kx, and a ≠ 0 (otherwise, c ≡ 0). Now using the second boundary condition:
X(π)T (t) = 0 for all t  ⟹  X(π) = 0  ⟹  sin kπ = 0 .
Therefore, k must be an integer (nonzero, since otherwise c ≡ 0).
We conclude that any separated-form solution must have the form
c(x, t) = a e^{−k²t} sin kx
for some nonzero integer k. One can easily check that, indeed, any such function is a solution. (Do it
as a homework problem!).
Moreover, since the diffusion equation is linear, any linear combination of solutions of this form is
also a solution.
For example,
5e^{−9t} sin 3x − 33e^{−16t} sin 4x
is a solution of our problem.
Example: suppose that the initial condition is c(x, 0) = 3 sin 5x − 2 sin 8x. Recall that a e^{−k²t} sin kx, with k a nonzero
integer, solves the equation. Since the initial condition has the two frequencies 5 and 8, we should obviously try
for a solution of the form:
c(x, t) = a₅ e^{−25t} sin 5x + a₈ e^{−64t} sin 8x .
We find the coefficients by plugging in t = 0:
c(x, 0) = a₅ sin 5x + a₈ sin 8x = 3 sin 5x − 2 sin 8x .
So we take a₅ = 3 and a₈ = −2, and thus obtain finally:
c(x, t) = 3e^{−25t} sin 5x − 2e^{−64t} sin 8x .
One can prove, although we will not do so in this course, that this is the unique solution with the given
boundary and initial conditions.
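This claimed solution is easy to check numerically: it vanishes at x = 0 and x = π, matches the initial condition, and satisfies ∂c/∂t = ∂²c/∂x² up to finite-difference error:

```python
import math

def c(x, t):
    # separated-form solution found above
    return 3 * math.exp(-25 * t) * math.sin(5 * x) - 2 * math.exp(-64 * t) * math.sin(8 * x)

# boundary conditions at x = 0 and x = pi
print(abs(c(0.0, 0.3)) < 1e-12, abs(c(math.pi, 0.3)) < 1e-12)

# PDE residual c_t - c_xx at a few sample points, by central differences
def residual(x, t, h=1e-4):
    ct = (c(x, t + h) - c(x, t - h)) / (2 * h)
    cxx = (c(x + h, t) - 2 * c(x, t) + c(x - h, t)) / (h * h)
    return ct - cxx

print(max(abs(residual(x, 0.1)) for x in (0.3, 1.0, 2.5)) < 1e-3)
```

The same check works for any finite linear combination of the modes e^{−k²t} sin kx, which is the linearity property used above.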
This works in exactly the same way whenever the initial condition is a finite sum Σₖ aₖ sin kx. Ignoring
questions of convergence, the same idea even works for an infinite sum Σ_{k=1}^{∞} aₖ sin kx. But
what if the initial condition is not a sum of sines? A beautiful area of mathematics, Fourier analysis, tells
us that it is possible to write any function defined on an interval as an infinite sum of this form. This
is analogous to the idea of writing any function of x (not just polynomials) as a sum of powers xⁱ.
You saw such expansions (Taylor series) in a calculus course.
The theory of expansions into sines and cosines is more involved (convergence of the series must be
interpreted in a very careful way), and we will not say anything more about that topic in this course.
Here are some pictures of approximations, though, for an interval of the form [0, 2π]. In each picture,
we see a function together with various approximants consisting of sums of an increasing number of
sinusoidal functions (red is the constant term; orange is a₀ + a₁ sin x, etc.).
Another Example
Suppose now that, in addition to diffusion, there is a reaction. A population of bacteria grows exponentially inside the same thin tube that we considered earlier, while still also moving at random.
Question: what is the smallest possible growth rate which guarantees that the population can grow?
The question, mathematically, is: for what growth rates λ are there unbounded solutions of this
problem?
∂c/∂t = ∂²c/∂x² + λc ,   c(0, t) = c(π, t) = 0 .
We only address here the easier question: for what λ's is there some unbounded solution of the
separated form c(x, t) = X(x)T (t)?
We follow the same idea as earlier:
X(x)T ′(t) = X″(x)T (t) + λX(x)T (t)
for all x, t, so there must exist some real number μ so that:
X″(x)/X(x) + λ = T ′(t)/T (t) = μ .
This gives us the coupled equations:
T ′(t) = μT (t)
X″(x) = (μ − λ)X(x)
with boundary conditions X(0) = X(π) = 0.
It must be the case that μ > 0, since otherwise T (t) = e^{μt} T (0) → 0 as t → ∞ (if μ < 0), or T (t) is constant
(if μ = 0), and the solution would not be unbounded.
We claim that, also, it must be true that μ < λ, since otherwise one cannot satisfy the boundary
conditions. We prove this inequality by contradiction.
Suppose that μ − λ ≥ 0. Then there is a real number β ≥ 0 such that β² = μ − λ and X satisfies the
equation X″ = β²X.
If β = 0, then the equation says that X = a + bx for some a, b. But X(0) = X(π) = 0 would then
imply a = b = 0, so X ≡ 0 and the solution is identically zero (and so not unbounded).
So let us assume that β ≠ 0. Thus:
X = ae^{βx} + be^{−βx}
and, using the two boundary conditions, we have a + b = ae^{βπ} + be^{−βπ} = 0, or in matrix form:
( 1        1        ) (a)
( e^{βπ}   e^{−βπ}  ) (b)   = 0 .
Since
det ( 1  1 ; e^{βπ}  e^{−βπ} ) = e^{−βπ} − e^{βπ} = e^{−βπ}(1 − e^{2βπ}) ≠ 0 ,
it follows that a = b = 0, so again X ≡ 0, a contradiction. This proves that μ < λ. Writing μ − λ = −k², the
same argument used for the pure diffusion equation shows that any unbounded separated-form solution must be of the form
a e^{(λ−k²)t} sin kx
with a nonzero integer k such that k² < λ, and some constant a ≠ 0, and, conversely, every such
function (or a linear combination thereof) is a solution (check!). If c represents a density of a population, a separable solution only makes sense if k = 1, since otherwise there will be negative values;
however, sums of several such terms may well be positive.
Homework: Under what conditions is there an unbounded (separated form) solution of:
∂c/∂t = ∂²c/∂x² + λc ,   c(0, t) = c(1, t) = 0 ?
Provide the general form of such solutions. What about the boundary conditions c(0, t) = 0,
(∂c/∂x)(1, t) = 0?
(Answers: λ > π², a e^{(λ−k²π²)t} sin kπx. In the second case, we need that λ > π²/4 and get the
solutions a e^{(λ−(k+1/2)²π²)t} sin((k + 1/2)πx).)
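The threshold can be seen in a quick simulation of ∂c/∂t = ∂²c/∂x² + λc on [0, 1] with zero boundary conditions (explicit finite differences; the grid, time step, and the two test values of λ are arbitrary choices bracketing the predicted threshold π² ≈ 9.87):

```python
import math

def final_amplitude(lam, nx=51, dt=1e-4, t_end=1.0):
    dx = 1.0 / (nx - 1)
    c = [math.sin(math.pi * i * dx) for i in range(nx)]  # lowest mode
    r = dt / dx ** 2                                     # r = 0.25 <= 0.5: stable
    for _ in range(int(t_end / dt)):
        new = [0.0] * nx                                 # Dirichlet ends stay 0
        for i in range(1, nx - 1):
            new[i] = c[i] + r * (c[i + 1] - 2 * c[i] + c[i - 1]) + dt * lam * c[i]
        c = new
    return max(c)

print(final_amplitude(8.0) < 0.5)    # lambda below pi^2: population dies out
print(final_amplitude(12.0) > 2.0)   # lambda above pi^2: population grows
```

Starting from the lowest mode sin πx, the amplitude evolves roughly like e^{(λ−π²)t}, so the two runs sit on opposite sides of the threshold.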
10.5
Suppose that the tube in the previous examples is closed at the end x = L (a similar argument applies
if it is closed at x = 0). We assume that, in that case, particles bounce back at a wall placed at x = L.
One models this situation by a "no flux" or Neumann boundary condition J(L, t) ≡ 0, which, for the
pure diffusion equation, is the same as (∂c/∂x)(L, t) ≡ 0.
One way to think of this is as follows. Imagine a narrow strip (of width ε) about the wall. For very
small ε, most particles bounce back far into the region, so the flux at x = L is ≈ 0.
Another way to think of this is using the reflecting boundary method. We replace the wall by a virtual
wall and look at the equation in a larger region obtained by adding a mirror image of the original region.
Every time that there is a bounce, we think of the particle as continuing into the mirror-image section.
Since everything is symmetric (we can start with a symmetric initial condition), the net flow
across this wall clearly balances out: even if individual particles would exit, on average the same
number leave as enter, and so the population density is exactly the same as if no particles would exit.
As we just said, the flux at the wall must be zero, again explaining the boundary condition.
10.6
Probabilistic Interpretation
For Gaussians, the mean distance from zero (up to a constant factor) coincides with the standard
deviation:
E(|X|) = (2/(σ√(2π))) ∫₀^{∞} x e^{−x²/(2σ²)} dx = σ √(2/π)
(substitute u = x/σ), and similarly in any dimension for E(√(x₁² + . . . + x_d²)). In particular, for the
diffusion Gaussian, with σ²(t) = 2Dt, the mean distance traveled is again proportional to √t.
10.7
for some constant (which depends on the c₀ that you chose). (Hint: solve, using the formula
c₀(x, t) = (C/√(4πDt)) e^{−x²/(4Dt)}.)
J.G. Skellam, Random dispersal in theoretical populations, Biometrika 38: 196-218, 1951.
equipopulation contours would have been perfect circles. (Obviously, terrain conditions and the locations
of cities make these contours deviate from perfect circles.) Notice the match to the prediction of a linear
dependence on time.
The third figure is an example⁶⁴ of the spread of Japanese beetles (Popillia japonica) in the Eastern
United States, with invasion fronts shown.
Remark. Continuing on the topic of the Remark on page 92, suppose that each particle in a population
evolves according to a differential equation dx/dt = f (x, t) + w, where w represents a noise effect
which, in the absence of the f term, would make the particles undergo purely random motion and the
population density satisfy the diffusion equation with diffusion coefficient D. When both effects are
superimposed, we obtain, for the density, an equation like ∂c/∂t = −div (cf ) + D∇²c. This is usually
called a Fokker-Planck equation. (To be more precise, the Fokker-Planck equation describes a more
general situation, in which the noise term affects the dynamics in a way that depends on the current
value of x. We'll work out details in a future version of these notes.)
10.8
Systems of PDEs
Of course, one often must study systems of partial differential equations, not just single PDEs.
We just discuss one example, that of diffusion with growth and nutrient depletion, since the idea
should be easy to understand. This example nicely connects with the material that we started the
course with.
We assume that a population of bacteria, with density n(x, t), move at random (diffusion), and in addition also reproduce with a rate K(c(x, t)) that depends on the local concentration c(x, t) of nutrient.
The nutrient is depleted at a rate proportional to its use, and it itself diffuses. Finally, we assume that
there is a linear death rate kn for the bacteria.
A model is:
∂n/∂t = D_n ∇²n + (K(c) − k)n
∂c/∂t = D_c ∇²c − αK(c)n
where D_n and D_c are diffusion constants, and α is the proportionality constant for nutrient consumption. The function K(c) could be, for example, a Michaelis-Menten rate K(c) = k_max c/(k_n + c).
You should ask yourself, as a homework problem, what the equations would be like if c were to
denote, instead, a toxic agent, as well as formulate other variations of the idea.
Another example, related to this one, is that of chemotaxis with diffusion. We look at this example
later, in the context of analyzing steady state solutions.
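A minimal one-dimensional simulation of the bacteria/nutrient system above (all parameter values, the no-flux boundaries, and the initial profiles are arbitrary choices made for illustration) shows the nutrient being depleted while both densities remain nonnegative:

```python
import math

nx, dx, dt = 61, 1.0 / 60, 5e-5
Dn, Dc, k, alpha = 0.01, 0.02, 0.1, 0.5
kmax, kn = 1.0, 0.3

def K(c):
    # Michaelis-Menten uptake rate
    return kmax * c / (kn + c)

n = [math.exp(-100 * (i * dx - 0.5) ** 2) for i in range(nx)]  # bacterial bump
c = [1.0] * nx                                                 # uniform nutrient
c_total0 = sum(c)

def lap(u, i):
    # discrete Laplacian with no-flux (reflecting) boundaries
    left = u[i - 1] if i > 0 else u[1]
    right = u[i + 1] if i < nx - 1 else u[nx - 2]
    return (left - 2 * u[i] + right) / dx ** 2

for _ in range(2000):
    n, c = (
        [n[i] + dt * (Dn * lap(n, i) + (K(c[i]) - k) * n[i]) for i in range(nx)],
        [c[i] + dt * (Dc * lap(c, i) - alpha * K(c[i]) * n[i]) for i in range(nx)],
    )

print(sum(c) < c_total0)            # nutrient has been consumed
print(min(c) >= 0 and min(n) >= 0)  # both densities stay nonnegative
```

Replacing the sign of the (K(c) − k)n term is one way to start exploring the "toxic agent" variant suggested above.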
64
from M.A. Lewis and S. Pacala, Modeling and analysis of stochastic invasion processes, J. Mathematical Biology 41,
387-429, 2000
11
For a system of ODEs dX/dt = F (X), steady states are the solutions of the algebraic equation F (X) = 0.
The vector field is only interesting near such states. One studies their stability, often using linearizations, in order to understand the behavior of the system under small perturbations from the
steady state, and also as a way to gain insight into the global behavior of the system.
For a partial differential equation of the form ∂c/∂t = F (c, cₓ, cₓₓ, . . .), where cₓ, etc., denote partial
derivatives with respect to space variables, or more generally for systems of such equations, one may
also look for steady states, and steady states also play an important role.
It is important to notice that, for PDEs, in general finding steady states involves not just solving an
algebraic equation like F (X) = 0 in the ODE case, but a partial differential equation. This is because
setting F (c, cₓ, cₓₓ, . . .) to zero is a PDE in the space variables. The solution will generally be a
function of x, not a constant. Still, the steady-state equation is in general easier to solve; for one thing,
there are fewer partial derivatives (no ∂c/∂t).
t
For example, take the diffusion equation, which we write now as:
∂c/∂t = L(c)
where L is the operator L(c) = D∇²c. A steady state is a function c(x) that satisfies L(c) = 0,
that is,
∇²c = 0
(subject to whatever boundary conditions were imposed). This is the Laplace equation.
We note (but we have no time to cover this in the course) that one may study stability for PDEs via
spectrum (i.e., eigenvalue) techniques for a linearized system, just as done for ODEs.
To check if a steady state c⁰ of ∂c/∂t = F (c) is stable, one linearizes at c = c⁰, leading to ∂c/∂t = Ac,
and then studies the stability of the zero solution of ∂c/∂t = Ac. To do that, in turn, one must find the
eigenvalues and eigenvectors (now eigenfunctions) of A (now an operator on functions, not a matrix),
that is, solve
Ac = λc
(with appropriate boundary conditions) for nonzero functions c(x) and real numbers λ. There are
many theorems in PDE theory that provide analogues to "stability of a linear PDE is equivalent to all
eigenvalues having negative real part." To see why you may expect such theorems to be true, suppose
that we have found a solution of Ac = λc, for some c ≢ 0. Then, the function
ĉ(x, t) = e^{λt} c(x)
also solves the equation: ∂ĉ/∂t = Aĉ. So, if for example λ > 0, then |ĉ(t, x)| → ∞ for those points
x where c(x) ≠ 0, as t → ∞, and the zero solution is unstable. On the other hand, if λ < 0, then
ĉ(t, x) → 0.
For the Laplace equation, it is possible to prove that there are a countably infinite number of eigenvalues. If we write L(c) = −∇²c (the negative sign is more usual in mathematics, for reasons that we will
not explain here), then the eigenvalues of L form a sequence 0 < λ₀ < λ₁ < . . ., with λₙ → ∞
as n → ∞, when Dirichlet conditions (zero at the boundary) are imposed, and 0 = λ₀ < λ₁ < . . .
when Neumann conditions (no-flux) are used. The eigenvectors that one obtains for domains that are
intervals are the trigonometric functions that we found when solving by separation of variables (the
eigenvalue/eigenvector equation, for one space variable, is precisely X″(x) + λX = 0).
In what follows, we just study steady states, and do not mention stability. (However, the steady states
that we find turn out, most of them, to be stable.)
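As a quick numerical sanity check of this spectral picture (not part of the original notes), one can discretize L = −d²/dx² on an interval with Dirichlet conditions and compare its smallest eigenvalues with the exact values (nπ/L)²; the grid size below is an arbitrary choice:

```python
import numpy as np

# Discretize L = -d^2/dx^2 on [0, Lx] with Dirichlet (zero) boundary
# conditions, using the standard second-order finite-difference stencil.
Lx, N = 1.0, 400
h = Lx / N

# tridiagonal matrix acting on the interior grid points
main = 2.0 * np.ones(N - 1) / h**2
off = -np.ones(N - 2) / h**2
Lmat = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

eigs = np.sort(np.linalg.eigvalsh(Lmat))
exact = np.array([(n * np.pi / Lx)**2 for n in range(1, 6)])
print(eigs[:5])   # close to pi^2 * [1, 4, 9, 16, 25]
```

All the eigenvalues come out positive, as the Dirichlet case above asserts; with Neumann (no-flux) conditions the smallest eigenvalue would instead be 0, corresponding to the constant eigenfunction.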
11.1
Many problems in biology (and other fields) involve the following situation. We have two regions, R and S, so that R wraps around S. A substance, such as a nutrient, is at a constant concentration, equal to c0, on the exterior of R. It is also constant, equal to some other value cS (typically, cS = 0), in the region S. In between, the substance diffuses. See this figure:

[Figure: a region R surrounding a region S; in R, ∂c/∂t = ∇²c; in S, c ≡ cS; on the exterior, c ≡ c0]
Examples abound in biology. For example, R might be a cell membrane, the exterior the extracellular environment, and S the cytoplasm.
In a different example, R might represent the cytoplasm and S the nucleus.
Yet another variation (which we mention later) is that in which the region R represents the immediate
environment of a single-cell organism, and the region S is the organism itself.
In such examples, the external concentration is taken to be constant because one assumes that nutrients are so abundant that they are not affected by consumption. The concentration in S is also assumed constant, either because S is very large (this is reasonable if S is the cytoplasm and R the cell membrane) or because once nutrients enter S they get absorbed immediately (and so the concentration in S is cS = 0).
Other examples typically modeled in this way include chemical transmitters at synapses, macrophages
fighting infection at air sacs in lungs, and many others.
In this Section, we only study steady states, that is, we look for solutions of ∇²c = 0 on R, with boundary conditions cS and c0.
Dimension 1
We start with the one-dimensional case, where S is the interval [0, a], for some a ≥ 0, and R is the interval [a, L], for some L > a.
We view the space variable x appearing in the concentration c(x, t) as one dimensional. However, one
could also interpret this problem as follows: S and R are cylinders, there is no flux in the directions
orthogonal to the x-axis, and we are only interested in solutions which are constant on cross-sections.
[Figure: cylinder along the x-axis, with no flux through the lateral boundary; ct = D∇²c in R, with c ≡ cS at x = a and c ≡ c0 at x = L]
The steady-state problem is that of finding a function c of one variable satisfying the following ODE and boundary conditions:

D d²c/dx² = 0 ,  c(a) = cS ,  c(L) = c0 .

Since c″ = 0, c(x) is linear, and fitting the boundary conditions gives the following unique solution:

c(x) = cS + (c0 − cS) (x − a)/(L − a) .

Notice that, therefore, the gradient of c is

dc/dx = (c0 − cS)/(L − a) .
Since, in general, the flux due to diffusion is −D∇c, we conclude that the flux is, in steady state, the following constant:

J = −(D/(L − a)) (c0 − cS) .

Suppose that c0 > cS. Then J < 0. In other words, an amount (D/(L − a))(c0 − cS) of nutrient traverses the region R = [a, L] (from right to left) per unit of time and per unit of cross-sectional area.
This formula gives an "Ohm's law" for diffusion across a membrane, when we think of R as a cell membrane. To see this, we write the above equality in the following way:

cS − c0 = J (L − a)/D

which makes it entirely analogous to Ohm's law in electricity, V = IR. We interpret the potential difference V as the difference between inside and outside concentrations, the flux J as the current I, and the resistance of the circuit as the length divided by the diffusion coefficient. (Faster diffusion or a shorter length results in less resistance.)
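The linear steady state and the Ohm's-law identity can be checked directly; all parameter values below are arbitrary illustrative choices:

```python
# Steady state of diffusion across a 1-D "membrane" R = [a, L]:
# linear profile, constant flux J, and the Ohm's-law identity
# cS - c0 = J*(L - a)/D.
D, a, L, cS, c0 = 2.0, 1.0, 3.0, 0.5, 4.0

def c(x):
    # linear steady state fitted to c(a) = cS, c(L) = c0
    return cS + (c0 - cS) * (x - a) / (L - a)

J = -D * (c0 - cS) / (L - a)           # flux: J = -D * dc/dx

assert abs(c(a) - cS) < 1e-12 and abs(c(L) - c0) < 1e-12
# Ohm's law analogy: "voltage drop" = current * resistance, R = (L - a)/D
assert abs((cS - c0) - J * (L - a) / D) < 1e-12
print(J)   # -3.5
```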
Radially Symmetric Solutions in Dimensions 2 and 3
In dimension 2, we assume now that S is a disk of radius a and R is a washer with outside radius L. For simplicity, we take the concentration in S to be cS = 0.

[Figure: disk S surrounded by the annular region R]
Since the boundary conditions are radially symmetric, we look for a radially symmetric solution, that
is, a function c that depends only on the radius r.
Recalling the formula for the Laplacian as a function of polar coordinates (previous homework), the diffusion PDE is:

∂c/∂t = (D/r) ∂/∂r ( r ∂c/∂r ) ,  c(a, t) = 0 ,  c(L, t) = c0 .
Since we are looking only for a steady-state solution, we set the right-hand side to zero and look for c = c(r) such that

(r c′)′ = 0 ,  c(a) = 0 ,  c(L) = c0 ,

where prime indicates derivative with respect to r.
Homework: show that the solution is

c(r) = c0 ln(r/a) / ln(L/a) .
Similarly, in dimension 3, taking S as a ball of radius a and R as the spherical shell with inside radius a and outside radius L, we have:

∂c/∂t = (D/r²) ∂/∂r ( r² ∂c/∂r ) ,  c(a, t) = 0 ,  c(L, t) = c0 .

Homework: show that the solution is

c(r) = (L c0/(L − a)) (1 − a/r) .
Notice the different forms of the solutions in dimensions 1, 2, and 3.
In the dimension-3 case, the derivative of c in the radial direction is, therefore:

c′(r) = L c0 a / ((L − a) r²) .
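Both homework formulas, and the radial derivative just computed, can be verified symbolically; here is a sketch using sympy:

```python
import sympy as sp

r, a, L, c0 = sp.symbols('r a L c0', positive=True)

# dimension 2: steady state of (1/r)(r c')' = 0 with c(a) = 0, c(L) = c0
c2 = c0 * sp.log(r / a) / sp.log(L / a)
assert sp.simplify(sp.diff(r * sp.diff(c2, r), r)) == 0
assert sp.simplify(c2.subs(r, a)) == 0
assert sp.simplify(c2.subs(r, L) - c0) == 0

# dimension 3: steady state of (1/r^2)(r^2 c')' = 0 with c(a) = 0, c(L) = c0
c3 = (L * c0 / (L - a)) * (1 - a / r)
assert sp.simplify(sp.diff(r**2 * sp.diff(c3, r), r)) == 0
assert sp.simplify(c3.subs(r, a)) == 0
assert sp.simplify(c3.subs(r, L) - c0) == 0

# radial derivative in dimension 3
assert sp.simplify(sp.diff(c3, r) - L * c0 * a / ((L - a) * r**2)) == 0
print("radial steady states verified")
```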
We now specialize to the example in which the region R represents the environment surrounding a
single-cell organism, the region S is the organism itself, and c models nutrient concentration.
We assume that the concentration of nutrient is constant far away from the organism, let us say farther than distance L, with L ≫ a. Then

c′(r) = c0 a/((1 − a/L) r²) ≈ c0 a/r² .
In general, the steady-state flux due to diffusion, in the radial direction, is −D c′(r). In particular, on the boundary of S, where r = a, we have:

J = −D c0/a .

Thus J is the amount of nutrient that passes, in steady state, through each unit of area of the boundary, per unit of time. (The sign is negative, J < 0, because the flow is toward the inside, i.e. toward smaller r.)
Since the boundary of S is a sphere of radius a, it has surface area 4πa². Therefore, nutrients enter S at a rate of

(D c0/a) · 4πa² = 4πD c0 a

per unit of time.
On the other hand, the metabolic need is roughly proportional to the volume of the organism. Thus, the amount of nutrients needed per unit of time is:

(4/3) π a³ M ,

where M is the metabolic rate per unit of volume per unit of time.
For the organism to survive, enough nutrients must pass through its boundary. If diffusion is the only mechanism for nutrients to come in, then survivability imposes the following constraint:

4πD c0 a ≥ (4/3) π a³ M ,

that is,

a ≤ a_critical = √(3 D c0 / M) .
Phytoplankton⁶⁵ are free-floating aquatic organisms that use bicarbonate ions (which enter by diffusion) as a source of carbon for photosynthesis, consuming one mole of bicarbonate per second per cubic meter of cell. The concentration of bicarbonate in seawater is about 1.5 moles per cubic meter, and D ≈ 1.5 × 10⁻⁹ m² s⁻¹. This gives a_critical = √(3 · (1.5 × 10⁻⁹) · 1.5 / 1) ≈ 8 × 10⁻⁵ m, that is, a radius of roughly 80 μm.
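The critical-radius arithmetic can be reproduced in a couple of lines:

```python
import math

# Critical radius a_critical = sqrt(3*D*c0/M) with the phytoplankton
# numbers from the text: D = 1.5e-9 m^2/s, c0 = 1.5 mol/m^3, M = 1 mol/(m^3 s).
D, c0, M = 1.5e-9, 1.5, 1.0
a_critical = math.sqrt(3 * D * c0 / M)
print(a_critical)   # about 8.2e-05 m, i.e. roughly 80 micrometers
```

So diffusion alone can supply such a cell only up to a radius of the order of a tenth of a millimeter, which helps explain why organisms relying purely on diffusion stay small.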
11.2
A very often used model that combines diffusion and chemotaxis is due to Keller and Segel. The model simply adds the diffusion and chemotaxis fluxes. In dimension 1, we have, then:

∂c/∂t = −div J = −∂/∂x ( c V′ − D ∂c/∂x ) .

We assume that the bacteria live on the one-dimensional interval [0, L] and that no bacteria can enter or leave through the endpoints. That is, we have no flux on the boundary:⁶⁶

J(0, t) = J(L, t) = 0  for all t .
Let us find the steady states. Setting ∂c/∂t = −∂J/∂x = 0, viewing now c as a function of x alone, and using primes for d/dx, gives:

J = c V′ − D c′ = J0  (some constant) .
⁶⁵ We borrow this example from M. Denny and S. Gaines, Chance in Biology, Princeton University Press, 2000. The authors point out there that the metabolic need is more accurately proportional, for multicellular organisms, to (mass)^(3/4), but it is not so clear what the correct scaling law is for unicellular ones.
⁶⁶ Notice that this is not the same as asking that ∂c/∂x(0, t) = ∂c/∂x(L, t) = 0. The density might be constant near a boundary, but this does not mean that the population will not get redistributed, since there is also movement due to chemotaxis. Only for a pure diffusion, when J = −D ∂c/∂x, is no-flux the same as ∂c/∂x = 0.
Since J0 = 0, because it vanishes at the endpoints, we have that (ln c)′ = c′/c = (V/D)′, and therefore

c = k e^(V/D)

for some constant k. Thus, the steady-state concentration is proportional to the exponential of the nutrient concentration, which is definitely not something that would be obvious.
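This steady state can be checked numerically: for a smooth nutrient profile V(x) (the one below is an invented example), integrating the steady-state relation c′ = cV′/D reproduces the closed form k e^(V/D):

```python
import math

D = 0.5
V  = lambda x: math.sin(2 * x) + 0.3 * x      # illustrative nutrient profile
Vp = lambda x: 2 * math.cos(2 * x) + 0.3      # its derivative

# integrate the steady-state equation c' = c V'/D with RK4 on [0, 1]
n = 10000
dx = 1.0 / n
c = 1.0                                       # c(0) = 1 fixes the constant k
k = c / math.exp(V(0.0) / D)
f = lambda x, c: c * Vp(x) / D
for i in range(n):
    x = i * dx
    k1 = f(x, c)
    k2 = f(x + dx / 2, c + dx * k1 / 2)
    k3 = f(x + dx / 2, c + dx * k2 / 2)
    k4 = f(x + dx, c + dx * k3)
    c += dx * (k1 + 2 * k2 + 2 * k3 + k4) / 6

err = abs(c - k * math.exp(V(1.0) / D))
print(err)   # tiny: the numerical solution matches k*exp(V/D)
```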
11.3
Facilitated Diffusion
Let us now work out an example⁶⁷ involving a system of PDEs, diffusion, chemical reactions, and quasi-steady-state approximations.
Myoglobin⁶⁸ is a protein that helps in the transport of oxygen in muscle fibers. The binding of oxygen to myoglobin results in oxymyoglobin, and this binding results in enhanced diffusion.
The facilitation of diffusion is somewhat counterintuitive, because the Mb molecule is much larger than oxygen (about 500 times larger), and so diffuses more slowly. A mathematical model helps in understanding what happens, and in quantifying the effect.
In the model, we take a muscle fiber to be one-dimensional, with no flux of Mb and MbO2 in or out. (Only unbound oxygen can pass the boundaries.)

[Figure: fiber on 0 ≤ x ≤ L, with notation s = O2, e = Mb, c = MbO2; boundary values s(0, t) ≡ s0 at x = 0 and s(L, t) ≡ sL < s0 at x = L]

The chemical reaction is:

O2 + Mb ⇌ MbO2 .
⁶⁷ Borrowing from J.P. Keener and J. Sneyd, Mathematical Physiology, Springer-Verlag, New York, 1998.
⁶⁸ From the Protein Data Bank, PDB, http://www.rcsb.org/pdb/molecules/mb3.html: "myoglobin is where the science of protein structure really began. . . John Kendrew and his coworkers determined the atomic structure of myoglobin, laying the foundation for an era of biological understanding. . . The iron atom at the center of the heme group holds the oxygen molecule tightly. Compare the two pictures. The first shows only a set of thin tubes to represent the protein chain, and the oxygen is easily seen. But when all of the atoms in the protein are shown in the second picture, the oxygen disappears, buried inside the protein. So how does the oxygen get in and out, if it is totally surrounded by protein? In reality, myoglobin (and all other proteins) are constantly in motion, performing small flexing and breathing motions. Temporary openings constantly appear and disappear, allowing oxygen in and out. The structure in the PDB is merely one snapshot of the protein, caught when it is in a tightly-closed form."
with equations:

∂s/∂t = Ds ∂²s/∂x² + k− c − k+ s e
∂e/∂t = De ∂²e/∂x² + k− c − k+ s e
∂c/∂t = Dc ∂²c/∂x² − k− c + k+ s e ,

where we assume that De = Dc (since Mb and MbO2 have comparable sizes). The boundary conditions are ∂e/∂x = ∂c/∂x ≡ 0 at x = 0, L, and s(0) = s0, s(L) = sL.
We next do a steady-state analysis of this problem, setting:

Ds sxx + k− c − k+ s e = 0
De exx + k− c − k+ s e = 0
Dc cxx − k− c + k+ s e = 0 .

Since De = Dc, we have that (e + c)xx ≡ 0.
So, e + c is a linear function of x, whose derivative is zero at the boundaries (no flux).
Therefore, e + c is constant, let us say equal to e0.
On the other hand, adding the first and third equations gives us that

(Ds sx + Dc cx)x = Ds sxx + Dc cxx = 0 .

This means that Ds sx + Dc cx is also constant:

Ds sx + Dc cx = −J .
Observe that J is the total flux of oxygen (bound or not), since it is the sum of the fluxes −Ds sx of s = O2 and −Dc cx of c = MbO2.
Let f(x) = Ds s(x) + Dc c(x). Since f′ = −J, it follows that f(0) − f(L) = JL, which means:

J = (Ds/L)(s0 − sL) + (Dc/L)(c0 − cL)

(where one knows the oxygen concentrations s0 and sL, but not necessarily c0 = c(0) and cL = c(L)).
We will next do a quasi-steady-state approximation, under the hypothesis that Ds is very small compared to the other numbers appearing in:

Ds sxx + k− c − k+ s (e0 − c) = 0

and this allows us to write⁶⁹

c = e0 s/(K + s)

⁶⁹ Changing variables σ = (k+/k−)s, u = c/e0, and y = x/L, one obtains ε σyy = σ(1 − u) − u, where ε = Ds/(e0 k+ L²). A typical value of ε is estimated to be 10⁻⁷. This says that σ(1 − u) − u ≈ 0, and from here one can solve for u as a function of σ, or equivalently, c as a function of s.
where K = k−/k+. This allows us, in particular, to substitute c0 in terms of s0, and cL in terms of sL, in the above formula for the flux, obtaining:

J = (Ds/L)(s0 − sL) + (Dc e0/L) [ s0/(K + s0) − sL/(K + sL) ] .

This formula exhibits the flux as the sum of the "Ohm's law" term plus a term that depends on the diffusion constant Dc of myoglobin.
(Note that this second term, which quantifies the advantage of using myoglobin, is positive, since s/(K + s) is increasing.)
With a little more work, which we omit here⁷⁰, one can solve for c(x) and s(x), using the quasi-steady-state approximation. One then obtains graphs for the concentrations and fluxes, respectively, of bound and free oxygen (note that the total flux J is constant, as already shown):

[Figure: concentration profiles, and flux profiles, of bound and free oxygen along the fiber]
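To get a feel for the size of the facilitation effect, one can evaluate the two terms of the flux formula; every parameter value below is an invented, order-of-magnitude illustration, not a measured physiological constant:

```python
# Two contributions to the oxygen flux:
#   J = (Ds/L)(s0 - sL) + (Dc*e0/L)(s0/(K+s0) - sL/(K+sL))
# The first is free diffusion ("Ohm's law"); the second is carried by Mb.
Ds, Dc = 1e-5, 4e-7       # cm^2/s: free O2 diffuses much faster than Mb
L = 5e-4                  # cm, fiber length (hypothetical)
e0 = 2e-4                 # mol/cm^3, total myoglobin (hypothetical)
K = 1e-6                  # mol/cm^3, K = k-/k+ (hypothetical)
s0, sL = 3e-6, 1e-7       # mol/cm^3, O2 at the two ends (hypothetical)

J_free = (Ds / L) * (s0 - sL)
J_mb = (Dc * e0 / L) * (s0 / (K + s0) - sL / (K + sL))
print(J_free, J_mb, J_mb / J_free)
```

With these made-up numbers the myoglobin term is comparable to (in fact larger than) the free-diffusion term, even though Dc ≪ Ds: the carrier compensates for its slow diffusion with its high capacity.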
An intuition for why myoglobin helps is as follows. Because oxygen binds to myoglobin, there is less free oxygen near the left endpoint. As the boundary conditions say that the concentration is s0 outside, more oxygen flows into the cell (diffusion tends to equalize). Similarly, at the other end, the opposite happens, and more flows out.
11.4
Density-Dependent Dispersal
Here is yet another example71 of modeling with a system of PDEs and steady-state calculations.
Suppose that the flux is proportional to cc, not to c as with diffusion: a transport-like equation,
where the velocity is determined by the gradient. In the scalar case, this would mean that the flux is
proportional to ccx , which is the derivative of c2 . Such a situation would occur if, for instance,
overcrowding encourages more movement.
To make the problem even more interesting, assume that there are two interacting populations, with densities u and v respectively, and each moves with a velocity that is proportional to the gradient of the total population u + v.
We obtain these equations:

∂u/∂t = ∇ · (u ∇(u + v))
∂v/∂t = ∇ · (v ∇(u + v))
and, in particular, in dimension 1:

∂u/∂t = ∂/∂x ( u ∂(u + v)/∂x )
∂v/∂t = ∂/∂x ( v ∂(u + v)/∂x ) .

Let us look for steady states: u = u(x) and v = v(x) solving (with u′ = ∂u/∂x, v′ = ∂v/∂x):

(u (u + v)′)′ = 0
(v (u + v)′)′ = 0 .
12
It is rather interesting that reaction-diffusion systems can exhibit traveling-wave behavior. Examples arise from systems exhibiting bistability, such as the developmental biology examples considered earlier or, in more complicated system form, species competition.
The reason that this is surprising is that, for pure diffusion, the distance covered tends to scale like the square root of elapsed time, not linearly. (But we have seen a similar phenomenon when discussing diffusion with exponential growth.)
We illustrate with a simple example, the following equation:

∂V/∂t = ∂²V/∂x² + f(V)

where f is a function that has zeroes at 0, α, and 1, with α < 1/2, and satisfies:

f′(0) < 0 ,  f′(1) < 0 ,  f′(α) > 0

so that the differential equation dV/dt = f(V) by itself, without diffusion, would be a bistable system.⁷²
We would like to know if there is any solution that looks like a traveling front moving to the left (we could also ask about right-moving solutions, of course).

⁷² Another classical example is that in which f represents logistic growth. That is the Fisher equation, which is used in genetics to model the spread in a population of a given allele.
In other words, we look for V(x, t) such that, for some waveform U that travels at some speed c, V can be written as a translation of U by ct:

V(x, t) = U(x + ct) .

In accordance with the above picture, we also want these four conditions to hold:

V(−∞, t) = 0 ,  V(+∞, t) = 1 ,  Vx(−∞, t) = 0 ,  Vx(+∞, t) = 0 .

The key step is to realize that the PDE for V induces an ordinary differential equation for the waveform U, and that these boundary conditions constrain what U and the speed c can be.
To get an equation for U, we plug V(x, t) = U(x + ct) into Vt = Vxx + f(V), obtaining:

cU′ = U″ + f(U) .

Furthermore, V(−∞, t) = 0, V(+∞, t) = 1, Vx(−∞, t) = 0, Vx(+∞, t) = 0 translate into:

U(−∞) = 0 ,  U(+∞) = 1 ,  U′(−∞) = 0 ,  U′(+∞) = 0 .
The theory can be developed quite generally, but here we'll only study in detail this very special case:

f(V) = −A² V (V − α)(V − 1)

which is easy to treat with explicit formulas.
Since U will satisfy U′ = 0 when U = 0, 1, we guess the functional relation:

U′(ξ) = B U(ξ) (1 − U(ξ))

(note that we are looking for a U satisfying 0 ≤ U ≤ 1, so 1 − U ≥ 0). We write ξ for the argument of U so as to not confuse it with x.
We substitute U′ = BU(1 − U) and (taking derivatives of this expression)

U″ = B² U (1 − U)(1 − 2U)

into the differential equation cU′ = U″ − A² U(U − α)(U − 1), and cancel the common factor U(1 − U), obtaining (do the calculation as a homework problem):

B²(2U − 1) + cB − A²(U − α) = 0 .

Since U is not constant (because U(−∞) = 0 and U(+∞) = 1), this means that we can compare coefficients of U in this expression, and conclude that 2B² − A² = 0 and −B² + cB + A²α = 0. Therefore:

B = A/√2 ,  c = (1 − 2α)A/√2 .
Substituting back into the differential equation for U, we have:

U′ = BU(1 − U) = (A/√2) U(1 − U) ,

an ODE that now does not involve the unknown B. We solve this ODE by separation of variables and partial fractions, using for example U(0) = 1/2 as an initial condition, getting:

U(ξ) = (1/2) [ 1 + tanh( Aξ/(2√2) ) ]

(obtain this solution, as a homework problem). Finally, since V(x, t) = U(x + ct), we conclude that:

V(x, t) = (1/2) [ 1 + tanh( (A/(2√2)) (x + ct) ) ]

where c = (1 − 2α)A/√2.
Observe that the speed c was uniquely determined; it will be larger if α ≈ 0, or if the reaction is stronger (larger A). This is not surprising! (Why?)
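One can confirm by centered finite differences that this explicit formula satisfies the PDE; the values of A and α below are arbitrary test choices:

```python
import math

A, alpha = 1.3, 0.2
c = (1 - 2 * alpha) * A / math.sqrt(2)

def V(x, t):
    # traveling front V(x,t) = (1/2)[1 + tanh((A/(2*sqrt(2)))(x + c t))]
    return 0.5 * (1 + math.tanh(A * (x + c * t) / (2 * math.sqrt(2))))

h = 1e-4
worst = 0.0
for x in (-3.0, -1.0, 0.0, 0.7, 2.5):
    t = 0.4
    Vt = (V(x, t + h) - V(x, t - h)) / (2 * h)
    Vxx = (V(x + h, t) - 2 * V(x, t) + V(x - h, t)) / h**2
    v = V(x, t)
    # residual of V_t = V_xx - A^2 V (V - alpha)(V - 1)
    resid = Vt - Vxx + A**2 * v * (v - alpha) * (v - 1)
    worst = max(worst, abs(resid))
print(worst)   # small: the PDE holds up to finite-difference error
```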
General Case
To study the general case (f not the explicit cubic that we used), we look at the following set of two ODEs for U and its derivative (using ′ for d/dξ):

U′ = W
W′ = −f(U) + cW .

The steady states satisfy W = 0 and f(U) = 0, so they are (0, 0) and (1, 0). The Jacobian is

J = ( 0    1
      −f′  c )

and has determinant f′ < 0 at the steady states, so they are both saddles. The conditions on U translate into the requirements that:

(U, W) → (0, 0) as ξ → −∞  and  (U, W) → (1, 0) as ξ → +∞

for the function U(ξ) and its derivative, seen as a solution of this system of two ODEs. (Note that ξ now plays the role of time.) In dynamical systems language, we need to show the existence of a heteroclinic connection between these two saddles. One first proves that, for c ≈ 0 and for c ≫ 1, the resulting trajectories respectively undershoot and overshoot the desired connection, so, by a continuity argument (similar to the intermediate value theorem), there must be some value of c for which the connection exactly happens. Details are given in many mathematical biology books.
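The undershoot/overshoot argument can be turned into a numerical shooting method. The sketch below does this for the cubic f used earlier, bisecting on c and comparing with the exact speed (1 − 2α)A/√2; the step sizes, starting offset, and bracket are ad-hoc choices:

```python
import math

# Shooting for the bistable traveling front: integrate U' = W,
# W' = c*W - f(U) from a point on the unstable manifold of the saddle
# (0, 0) and bisect on the wave speed c. A, alpha are illustrative.
A, alpha = 1.0, 0.2

def f(u):
    # bistable reaction term f(U) = -A^2 U (U - alpha)(U - 1)
    return -A * A * u * (u - alpha) * (u - 1.0)

def shoot(c, eps=1e-6, dt=0.02, xi_max=150.0):
    """'overshoot' if U passes 1 while W > 0, 'undershoot' if W hits 0 first."""
    # unstable eigenvalue of the linearization at (0, 0)
    lam = 0.5 * (c + math.sqrt(c * c + 4.0 * A * A * alpha))
    u, w = eps, eps * lam
    rhs = lambda u, w: (w, c * w - f(u))
    for _ in range(int(xi_max / dt)):
        # one RK4 step for the planar system
        k1u, k1w = rhs(u, w)
        k2u, k2w = rhs(u + 0.5 * dt * k1u, w + 0.5 * dt * k1w)
        k3u, k3w = rhs(u + 0.5 * dt * k2u, w + 0.5 * dt * k2w)
        k4u, k4w = rhs(u + dt * k3u, w + dt * k3w)
        u += dt * (k1u + 2 * k2u + 2 * k3u + k4u) / 6.0
        w += dt * (k1w + 2 * k2w + 2 * k3w + k4w) / 6.0
        if u >= 1.0:
            return 'overshoot'
        if w <= 0.0:
            return 'undershoot'
    # near the connecting speed neither event fires; classify by position
    return 'overshoot' if u > 0.5 else 'undershoot'

lo, hi = 0.0, 5.0 * A          # undershoots at lo, overshoots at hi
for _ in range(30):
    mid = 0.5 * (lo + hi)
    if shoot(mid) == 'overshoot':
        hi = mid
    else:
        lo = mid

c_numeric = 0.5 * (lo + hi)
c_exact = (1.0 - 2.0 * alpha) * A / math.sqrt(2.0)
print(c_numeric, c_exact)      # both near 0.424 for these parameters
```

Raising c strengthens the "anti-damping" term cW, so large c overshoots and small c undershoots, which is what makes the bisection monotone.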