
Chapter 5

Eigenvalues and
Eigenvectors

In this chapter we return to the study of linear transformations that we started


in Chapter 3. The ideas presented here are related to finding the “simplest”
matrix representation for a fixed linear transformation. As you recall, a matrix
representation is determined once the bases for the two vector spaces are picked.
Thus our problem is how to pick these bases.

5.1 What is an Eigenvector?


Before defining eigenvectors and eigenvalues let us look at the linear transfor-
mation L, from R2 to R2, whose matrix representation is

A = [ 2  0 ]
    [ 0  3 ]
We cannot compute L(x1, x2) until we specify which basis G we used. Let's
assume that G = {g1, g2}. Then we know that L(g1) = 2g1 + 0g2 = 2g1 and
L(g2) = 0g1 + 3g2 = 3g2. Thus, L(gk) just multiplies gk by the corresponding
element in the main diagonal of A. Figure 5.1 illustrates this. If x = x1 g1 +
x2 g2, then L(x) = 2x1 g1 + 3x2 g2. Since L multiplies each basis vector by
some constant, it is extremely easy to compute and visualize what the linear
transformation does to R2. In fact, since scalar multiplication is the simplest
linear transformation possible, we would like to be able to do the following.
Given a linear transformation L from Rn to Rn, find a basis F = {f1, . . . , fn}
such that L(fk) = λk fk for k = 1, 2, . . . , n; that is, find n linearly independent
vectors upon which L acts as scalar multiplication. Unfortunately, it is not
always possible to do this. There are, however, large classes of linear
transformations for which it is possible to find such a set of vectors.
These vectors are called eigenvectors, and the scalar multipliers λk are called
the eigenvalues of L. The reader should note that the terms characteristic
vector and characteristic value are also used; sometimes the word "proper" is
substituted for "characteristic." We formalize this discussion with the
following definition.

Figure 5.1. The basis vectors g1 and g2 and their images L[g1] = 2g1 and L[g2] = 3g2.

Definition 5.1. Let L be a linear transformation that maps a vector space into
itself. A nonzero vector x is called an eigenvector of L if there is a scalar λ
such that L(x) = λx. The scalar λ is called an eigenvalue of L, and the
eigenvector is said to belong to, or correspond to, λ.
OK, we know what we want: eigenvectors. How do we find them? Let's
examine a 2 × 2 matrix. Let

A = [ 2  6 ]
    [ 1  3 ]
and suppose that A is the matrix representation of a linear transformation L
with respect to the standard basis. Thus, for any x = (x1, x2) we have

L(x) = [ 2  6 ] [ x1 ]  =  [ 2x1 + 6x2 ]
       [ 1  3 ] [ x2 ]     [ x1 + 3x2  ]

We want to find those numbers λ for which there is a nonzero vector x such
that L(x) = λx. Thus,

A [ x1 ]  =  λ [ x1 ]
  [ x2 ]       [ x2 ]

or

(A − λI2) [ x1 ]  =  [ 0 ]
          [ x2 ]     [ 0 ]

Hence, we are looking for those numbers λ for which the equation (A − λI2)x = 0
has a nontrivial solution. But this happens if and only if det(A − λI2) = 0. For
this particular A we have

det(A − λI2) = det [ 2−λ    6  ]  =  (2 − λ)(3 − λ) − 6
                   [  1   3−λ  ]

             = λ^2 − 5λ = λ(λ − 5)

The only values of λ that satisfy the equation det(A − λI2 ) = 0 are λ = 0 and
λ = 5. Thus the eigenvalues of L are 0 and 5. An eigenvector of 5, for example,
will be any nonzero vector x in the kernel of A − 5I2 .
In the following pages when we talk about finding the eigenvalues and eigen-
vectors of some n×n matrix A, what we mean is that A is the matrix representa-
tion, with respect to the standard basis in Rn , of a linear transformation L, and
the eigenvalues and eigenvectors of A are just the eigenvalues and eigenvectors
of L.
Example 1. Find the eigenvalues and eigenvectors of the matrix

A = [ 2  6 ]
    [ 1  3 ]

From the above discussion we know that the only possible eigenvalues of A are
0 and 5.
λ = 0: We want x = (x1, x2) such that

( [ 2  6 ]  −  0 [ 1  0 ] ) [ x1 ]  =  [ 0 ]
( [ 1  3 ]       [ 0  1 ] ) [ x2 ]     [ 0 ]
 
The coefficient matrix of this system is

[ 2  6 ]
[ 1  3 ]

and it is row equivalent to the matrix

[ 1  3 ]
[ 0  0 ]

The solutions to this homogeneous system satisfy x1 = −3x2.
Therefore, ker(A − 0I2) = S[(−3, 1)], and any eigenvector of A corresponding
to the eigenvalue 0 is a nonzero multiple of (−3, 1). As a check we compute

A [ −3 ]  =  [ 2  6 ] [ −3 ]  =  [ 0 ]  =  0 [ −3 ]
  [  1 ]     [ 1  3 ] [  1 ]     [ 0 ]       [  1 ]

λ = 5: We want to find those vectors x such that (A − 5I2)x = 0. This leads
to the equation

( [ 2  6 ]  −  5 [ 1  0 ] ) [ x1 ]  =  [ 0 ]
( [ 1  3 ]       [ 0  1 ] ) [ x2 ]     [ 0 ]
 
The coefficient matrix of this system,

[ −3   6 ]
[  1  −2 ]

is row equivalent to the matrix

[ 1  −2 ]
[ 0   0 ]

Any solution of this system satisfies x1 = 2x2. Hence, ker(A − 5I2) = S[(2, 1)].
All eigenvectors corresponding to the eigenvalue λ = 5 must be
nonzero multiples of (2, 1). Checking to see that (2, 1) is indeed an eigenvector
corresponding to 5, we have
        
A [ 2 ]  =  [ 2  6 ] [ 2 ]  =  [ 10 ]  =  5 [ 2 ]
  [ 1 ]     [ 1  3 ] [ 1 ]     [  5 ]       [ 1 ]
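Computations like those in Example 1 are easy to check numerically. The following sketch uses NumPy (an assumption of ours; the text itself assumes no software) to confirm the eigenvalues of the matrix above:

```python
import numpy as np

# The matrix of Example 1.
A = np.array([[2.0, 6.0],
              [1.0, 3.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose columns
# are corresponding (normalized) eigenvectors.
evals, evecs = np.linalg.eig(A)

# Sort for a predictable order: the eigenvalues should be 0 and 5.
order = np.argsort(evals.real)
evals = evals[order].real
evecs = evecs[:, order]

# Verify A v = lambda v for each returned eigenvector.
for lam, v in zip(evals, evecs.T):
    assert np.allclose(A @ v, lam * v)
```

The returned eigenvectors are unit vectors, so they are scalar multiples of the hand-computed (−3, 1) and (2, 1).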
We summarize the above discussion with the following definition and theorem.
Definition 5.2. Let A be any n × n matrix. Let λ be any scalar. Then the
n × n matrix A − λIn is called the characteristic matrix of A and the nth degree
polynomial p(λ) = det(A − λIn ) is called the characteristic polynomial of A.
The characteristic polynomial is sometimes defined as det(λIn − A) =
det[−(A − λIn)] = (−1)^n det(A − λIn) = (−1)^n p(λ). Thus, the two versions
differ by at most a minus sign.
Theorem 5.1. Let A be any n × n matrix. Then λ0 is an eigenvalue of A
with corresponding eigenvector x0 if and only if det(A − λ0 In) = 0 and x0 is a
nonzero vector in ker(A − λ0 In).
Proof. x0 is an eigenvector corresponding to λ0 if and only if Ax0 = λ0 x0 and
x0 is nonzero. But this is equivalent to saying that x0 is a nonzero vector in
ker(A − λ0 In). And ker(A − λ0 In) contains a nonzero vector if and only if
det(A − λ0 In) = 0.
Thus a necessary condition for λ0 to be an eigenvalue of A is that λ0 be a root
of the characteristic polynomial of A.
A mistake that is sometimes made when trying to calculate the characteristic
polynomial of a matrix is to first find a matrix B, in row echelon form, that is
row equivalent to A and then compute the characteristic polynomial of B. There
is usually no relationship whatsoever between the characteristic polynomials of
A and B.
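This warning is easy to demonstrate numerically. In the sketch below (using NumPy, our assumption, not the text's), B is a row echelon form of the matrix A from Example 1, yet its eigenvalues differ from those of A:

```python
import numpy as np

A = np.array([[2.0, 6.0],
              [1.0, 3.0]])

# B is row equivalent to A: row2 -> row2 - (1/2) * row1.
B = np.array([[2.0, 6.0],
              [0.0, 0.0]])

eig_A = sorted(np.linalg.eigvals(A).real)   # eigenvalues 0 and 5
eig_B = sorted(np.linalg.eigvals(B).real)   # eigenvalues 0 and 2

# Row reduction changed the spectrum, hence the characteristic polynomial.
assert not np.allclose(eig_A, eig_B)
```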
 
Example 2. Let

A = [  1  1  −2 ]
    [ −1  2   1 ]
    [  0  1  −1 ]

Compute the characteristic matrix and polynomial of A. Determine the
eigenvalues and eigenvectors of A. The characteristic matrix of A is

A − λI3 = [ 1−λ    1     −2  ]
          [ −1    2−λ     1  ]
          [  0     1    −1−λ ]
The characteristic polynomial of A is det(A − λI3 ) and equals
   
det [ 1−λ    1     −2  ]        [ 1−λ    1    −1−λ ]
    [ −1    2−λ     1  ]  = det [ −1    2−λ     0  ]
    [  0     1    −1−λ ]        [  0     1    −1−λ ]

                                [ 1−λ    0      0  ]
                          = det [ −1    2−λ     0  ]
                                [  0     1    −1−λ ]

                          = −(λ + 1)(λ − 1)(λ − 2)

Thus, the eigenvalues of A are −1, 1, and 2. We determine the eigenvectors of
A by finding the nonzero vectors in ker(A − λ0 I3), for λ0 = −1, 1, and 2.

λ = −1: The matrix A − (−1)I3 equals

[  2  1  −2 ]
[ −1  3   1 ]
[  0  1   0 ]

and it is row equivalent to the matrix

[ 1  0  −1 ]
[ 0  1   0 ]
[ 0  0   0 ]

This implies that ker(A + I3) equals S[(1, 0, 1)]. One easily checks that
A(1, 0, 1)T = (−1)(1, 0, 1)T.

λ = 1:

A − I3 = [  0  1  −2 ]
         [ −1  1   1 ]
         [  0  1  −2 ]

and this matrix is row equivalent to

[ 1  0  −3 ]
[ 0  1  −2 ]
[ 0  0   0 ]

Clearly ker(A − I3) = S[(3, 2, 1)], and a quick calculation shows that
A(3, 2, 1)T = (3, 2, 1)T.

λ = 2:

A − 2I3 = [ −1  1  −2 ]
          [ −1  0   1 ]
          [  0  1  −3 ]

This matrix is row equivalent to

[ 1  0  −1 ]
[ 0  1  −3 ]
[ 0  0   0 ]

Thus, ker(A − 2I3) = S[(1, 3, 1)]. Computing A(1, 3, 1)T, we see that it
equals 2(1, 3, 1)T.
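As with Example 1, the hand computations can be confirmed numerically; a sketch using NumPy (our assumption):

```python
import numpy as np

# The matrix of Example 2.
A = np.array([[ 1.0, 1.0, -2.0],
              [-1.0, 2.0,  1.0],
              [ 0.0, 1.0, -1.0]])

# The eigenpairs found above.
pairs = [(-1.0, np.array([1.0, 0.0, 1.0])),
         ( 1.0, np.array([3.0, 2.0, 1.0])),
         ( 2.0, np.array([1.0, 3.0, 1.0]))]

# Each vector should satisfy A v = lambda v.
for lam, v in pairs:
    assert np.allclose(A @ v, lam * v)

eigenvalues = sorted(np.linalg.eigvals(A).real)
```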

If A is an n × n matrix, we've defined the characteristic polynomial p(λ) of
A to be det(A − λIn). If λ1, λ2, . . . , λq are the distinct roots of p(λ), then we
may write

p(λ) = (−1)^n (λ − λ1)^m1 (λ − λ2)^m2 · · · (λ − λq)^mq

where m1 + m2 + · · · + mq = n. The multiplicity of an eigenvalue is the exponent
corresponding to that eigenvalue. Thus, the multiplicity of λ1 is m1, that of λ2
is m2, etc. For example, if A equals

[ 2  1  0  0 ]
[ 0  2  1  0 ]
[ 0  0  2  2 ]
[ 0  0  0  4 ]

then p(λ) = (λ − 2)^3 (λ − 4). The eigenvalue 2 has multiplicity 3 while the
eigenvalue 4 has multiplicity 1.
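Numerically, multiplicities show up as repeated values in the list of eigenvalues. A sketch (NumPy assumed) for the 4 × 4 matrix above:

```python
import numpy as np
from collections import Counter

# The upper triangular matrix above; its eigenvalues are the
# diagonal entries 2, 2, 2, 4.
A = np.array([[2.0, 1.0, 0.0, 0.0],
              [0.0, 2.0, 1.0, 0.0],
              [0.0, 0.0, 2.0, 2.0],
              [0.0, 0.0, 0.0, 4.0]])

# Round before counting: the repeated root 2 is computed only
# approximately in floating point.
counts = Counter(int(round(x)) for x in np.linalg.eigvals(A).real)
```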
One last bit of terminology: When we wish to talk about all the eigenvectors
associated with one eigenvalue we use the term eigenspace. Thus, in Example 2
the eigenspace of the eigenvalue (−1) is just ker(A − (−1)I). In general, if L is
any linear transformation from a vector space into itself and λ0 is an eigenvalue
of L, the eigenspace of λ0 is ker(L − λ0 I). That is, the eigenspace of λ0 consists
of all its eigenvectors plus the zero vector. Note that the zero vector is never an
eigenvector.
We've seen how to compute the eigenvalues of a linear transformation if the
linear transformation is matrix multiplication. What do we do in the more
abstract setting when L : V → V? Well, in Chapter 3 we saw that once we
fix a basis F = {f1, . . . , fn} of V, we have a matrix representation, A, of L.
Moreover, L(x) = λx for some nonzero vector x and scalar λ if and only if
A[x]F = λ[x]F. Thus, λ is an eigenvalue of L if and only if det(A − λI) = 0 and
ker(A − λI) has a nonzero vector; and the nonzero vectors in ker(A − λI) are the
coordinates, with respect to the basis F, of the nonzero vectors in ker(L − λI).
What this means is that we may calculate the eigenvalues and eigenvectors
of L by calculating the eigenvalues and eigenvectors of any one of its matrix
representations.

Problem Set 5.1


1. Compute the characteristic matrix and polynomial of each of the following
matrices:
 
a. [ 1  4 ]    b. [ 7  8 ]    c. [ 1   3  0 ]
   [ 3  8 ]       [ 0  4 ]       [ 1   2  1 ]
                                 [ 4  −5  8 ]
2. Calculate the characteristic polynomial of the following matrices:
   
a. [ 2  −6   1 ]    b. [ −1  3  2 ]
   [ 4   0  −5 ]       [  6  1  0 ]
   [ 0  −1   0 ]       [  4  5  0 ]
3. Find the eigenvalues, their multiplicities, and the dimensions of the eigenspaces
of the following matrices:
       
a. [ 3  0 ]    b. [ 3  1 ]    c. [ 3  1 ]    d. [ 3  0 ]
   [ 0  3 ]       [ 0  3 ]       [ 1  3 ]       [ 0  2 ]
4. Find the eigenvalues, their multiplicities, and the dimensions of the eigenspaces
of the following matrices:
     
a. [ −1   0   0 ]    b. [ −1   1   0 ]    c. [ −1   1   0 ]
   [  0  −1   0 ]       [  0  −1   0 ]       [  0  −1   1 ]
   [  0   0  −1 ]       [  0   0  −1 ]       [  0   0  −1 ]
5. Let V be a vector space. Let L : V → V be a linear transformation. If
λ0 is an eigenvalue of L, show that the eigenspace of V corresponding to
λ0 is a subspace of V and has dimension at least 1. The eigenspace of λ0
is defined to be the set of vectors x such that L(x) = λ0 x.
6. Let A and B be two similar matrices. Thus, there is a matrix P such that
A = P −1 BP .
a. Show that A and B have the same eigenvalues with the same multi-
plicities.
b. If W is the eigenspace of A corresponding to λ, what is the eigenspace
of B corresponding to λ?

7. Let A be an n × n matrix. Let p(λ) = det(A − λIn) be its characteristic
   polynomial. Then if λ1, λ2, . . . , λn are the roots of p(λ), we may write

   p(λ) = (−1)^n (λ − λ1)(λ − λ2) · · · (λ − λn)
        = (−1)^n [λ^n − c1 λ^{n−1} + · · · + (−1)^n cn]

   a. Assume A is a 2 × 2 matrix, A = [ajk]. Then p(λ) = λ^2 − c1 λ + c2.
      Show that c2 = det(A) = λ1 λ2 and c1 = λ1 + λ2 = a11 + a22.
   b. Assume A = [ajk] is a 3 × 3 matrix. Then p(λ) = −(λ^3 − c1 λ^2 +
      c2 λ − c3). Show that c1 = λ1 + λ2 + λ3 = a11 + a22 + a33 and that
      c3 = det(A) = λ1 λ2 λ3.
   c. Generalize parts a and b to n × n matrices.
8. If A is an n × n matrix, A = [ajk ], we define the trace of A = Tr(A) =
a11 + a22 + · · · + ann . Show that the following are true for any two n × n
matrices A and B:
a. Tr(A + B) = Tr(A) + Tr(B)
b. Tr(AB) = Tr(BA)
c. Tr(A) = Tr(AT )
d. Show that if A and B are similar matrices, Tr(A) = Tr(B).
9. Let A be an n × n matrix and let p(λ) = det(A − λIn) be the characteristic
   polynomial of A. Then if λ1, λ2, . . . , λn are the roots of p(λ), we may write

   p(λ) = (−1)^n (λ − λ1)(λ − λ2) · · · (λ − λn)
        = (−1)^n [λ^n − c1 λ^{n−1} + · · · + (−1)^n cn]

   By p(A) we mean

   p(A) = (−1)^n [A^n − c1 A^{n−1} + · · · + (−1)^n cn In]

That is, wherever λ appears in p(λ), it is replaced by A. For each of the


matrices in problem 1 compute p(A).
 
10. Let

    A = [ 0  2 ]
        [ 2  0 ]

    Show that the eigenvalues of A are ±2. Let x and y be two eigenvectors
    corresponding to 2 and −2, respectively.
    a. Show that A^n x = 2^n x and A^n y = (−2)^n y.
    b. Let g(λ) = λ^n + c1 λ^{n−1} + · · · + cn be an arbitrary polynomial. Define
       the matrix g(A) by g(A) = A^n + c1 A^{n−1} + · · · + cn I. Show that
       g(A)x = g(2)x and g(A)y = g(−2)y.
11. Let A be any n × n matrix. Suppose λ0 is an eigenvalue of A with x0 any
    eigenvector corresponding to λ0. Let g(λ) be any polynomial in λ; define
    g(A) as in problem 10. Show that g(A)x0 = g(λ0)x0.
 
12. Let

    A = [ 0  −1 ]
        [ 1   0 ]

    As we saw in Chapter 3, this matrix represents a rotation of 90 degrees
    about the origin. As such, we should not expect any eigenvectors. Why?
    Compute det(A − λI) and find all the roots of this polynomial. Show that
    there is no vector x in R2 such that Ax = λx except for the zero vector.
    What happens if R2 is replaced by C2? For which rotations in R2, if any,
    are there eigenvectors?
13. Find the eigenvalues and eigenspaces of the matrices below:
 4 4 
  3 0 2

0 0 −3 4 3 
 2 2 
−2 2 −2 4  
  3 0 1
3
 
−2 0 1 2 

 −1 −1 0 3

−3 0 −3 7
1 1
 
− − 0 0
3 3
 
14. Let

    A = [ d1   0 ]
        [  0  d2 ]
a. Find the eigenvalues and eigenspaces of A.
    b. Let p(λ) = det(A − λI). Show that p(A) = 0_22, the zero 2 × 2 matrix;
       cf. problem 10.
    c. Assume that det(A) = d1 d2 ≠ 0. Use the equation p(A) = 0_22 to
       compute A^{−1}, where p(λ) = det(A − λI).
15. Let A = [ajk] be any 2 × 2 matrix. Let p(λ) be its characteristic
    polynomial. Show that p(A) = 0_22. Assume that det(A) ≠ 0 and show how to
    compute A^{−1} using this information.
16. Let A be any n × n matrix and let p(λ) be its characteristic polynomial.
    The Hamilton–Cayley theorem states that p(A) = 0_nn. Assuming that
    det(A) ≠ 0, explain how one could compute A^{−1} by using the equation
    p(A) = 0_nn.
 
17. Let

    A = [ 6  1 ]
        [ 0  9 ]
a. Find the eigenvalues of A.
b. For any constant c, find the eigenvalues of the matrix A − cI.
c. For any constant c, find the eigenvalues of the matrix cA.
18. Let A be any n × n matrix. Suppose the eigenvalues of A are {λ1 , . . . , λn }.
Let c be any constant.
a. Show that the eigenvalues of A − cI are {λ1 − c, . . . , λn − c}.
b. What are the eigenvalues of cA?

19. Let A = [ajk] be an invertible n × n matrix. Suppose that {λ1, . . . , λn}
    is the set of eigenvalues of A. Show that λj ≠ 0 for j = 1, . . . , n. Then
    show that the eigenvalues of A^{−1} are the reciprocals of the λj. That is,
    if 2 is an eigenvalue of A, then 1/2 is an eigenvalue of A^{−1}. How are the
    eigenvectors of A and A^{−1} related?

20. Let A be an n × n matrix. Suppose x0 is an eigenvector of A.

    a. Show that x0 is an eigenvector of A^n for any n.
    b. What is the eigenvalue of A^n for which x0 is the corresponding
       eigenvector?
    c. Show that x0 is an eigenvector of A − kI for any constant k.
    d. What is the eigenvalue of A − kI for which x0 is the corresponding
       eigenvector?

21. Show that det(A−λI) = det(AT −λI) for any constant λ. Thus, A and AT
have the same characteristic polynomial, and hence the same eigenvalues.

22. Find a 2×2 matrix A for which A and AT do not have the same eigenspaces.

23. Find the eigenvalues, their multiplicities, and the dimensions of the cor-
responding eigenspaces for each of the following matrices:
     
a. [ 1  1  0  0 ]    b. [ 1  1  0  0 ]    c. [ 1  1  0  0 ]
   [ 0  1  0  0 ]       [ 0  1  1  0 ]       [ 0  1  1  0 ]
   [ 0  0  1  0 ]       [ 0  0  1  0 ]       [ 0  0  1  1 ]
   [ 0  0  0  1 ]       [ 0  0  0  1 ]       [ 0  0  0  1 ]
 
24. Let

    A = [ −1  3 ]
        [  1  1 ]

    Define L : M22 → M22 by L[B] = AB for any B in M22. Find the eigenvalues
    of L, their multiplicities, and the dimensions of the eigenspaces.

25. Let A be any n × n matrix. Define L : Mnn → Mnn by L[B] = AB. Show
    that λ is an eigenvalue of L if and only if λ is an eigenvalue of the matrix
    A. Remember that λ is an eigenvalue of A only if there exists a nonzero
    x in Rn such that Ax = λx.
26. Define L : P2 → P2 by L[p](t) = (1/t) ∫₀ᵗ p(s) ds. Find the eigenvalues,
    their multiplicities, and the dimensions of the eigenspaces.
   
27. Let

    A = [ 1  3 ]    and    B = [ 1  0 ]
        [ 0  1 ]               [ 0  6 ]

    Define L : M22 → M22 by L[x] = Ax + xB. Show that L is a linear
    transformation, and then find its eigenvalues, their multiplicities, and the
    corresponding eigenspaces.

5.2 Diagonalization of Matrices


In the last section we stated that the eigenvalues of a matrix A are those roots
of the characteristic polynomial p(λ) = det(A − λI) for which ker(A − λI)
contains more than just the zero vector. The fundamental theorem of algebra
states that every polynomial has at least one root. Thus, every matrix should
have at least one eigenvalue and a corresponding eigenvector. This argument
erroneously leads us to believe that every linear transformation L, from a vector
space V into V , has at least one eigenvector and eigenvalue. This need not be
true when V is a real vector space, since multiplication by complex numbers is
not allowed, and the root of p(λ) = 0 that is guaranteed by the fundamental
theorem of algebra may be a complex number. The matrix

[ 0  −1 ]
[ 1   0 ]

is one such example; cf. problem 12 in Section 5.1. The trouble with a complex
number being an eigenvalue is that our vector space may only allow
multiplication by real numbers. For example, if i represents the square root
of −1 and x = (1, 2) is a vector in R2, then ix = (i, 2i) makes sense, but it
is no longer in R2. If our vector space is C2, then, since multiplication by
complex scalars is allowed, the above problem does not arise.
Let’s step back and review what it is we want:
simple representations for linear transformations.
From this, we are led to the idea of eigenvectors and the realization that we
want not just one eigenvector, but a basis of eigenvectors. In fact, we have the
following theorem.
Theorem 5.2. Let L : V → V be a linear transformation of a vector space
V into itself. Suppose there is a basis of V that consists of eigenvectors of L.
Then the matrix representation of L with respect to this basis will be diagonal
and the diagonal elements will be the eigenvalues of L.
Proof. Let F = {f1, . . . , fn} be a basis of V such that L(fk) = λk fk; i.e., fk is
an eigenvector of L and λk is the corresponding eigenvalue. Then if A = [ajk]
is the matrix representation of L with respect to the basis F, we have

λk fk = L(fk) = a1k f1 + a2k f2 + · · · + ank fn

Thus, ajk = 0 if j ≠ k and akk = λk. In other words, A is a diagonal matrix
whose diagonal elements are precisely the eigenvalues of L.
Notice that there is no problem with complex numbers here; we avoid the
difficulty by assuming a basis of eigenvectors. The next question is, how can
we tell if there is such a basis? The following lemma helps answer this
question.
Lemma 5.1. Let L : V → V be a linear transformation. Suppose λ1, λ2, . . . , λp
are distinct eigenvalues of L. Let f1, . . . , fp be eigenvectors corresponding to
them. Then the set of vectors {f1, . . . , fp} is linearly independent.

Proof. We prove that this set is linearly independent by an inductive process;
that is, we show the theorem is true when p = 1 and 2 and then show how to go
from p − 1 to p. Suppose p = 1: then the set in question is just {f1}, and since
f1 ≠ 0, it is linearly independent. Now suppose there are constants c1 and c2
such that

0 = c1 f1 + c2 f2     (5.1)

Then we also have

0 = L(0) = L(c1 f1 + c2 f2)
  = c1 L(f1) + c2 L(f2)
  = c1 λ1 f1 + c2 λ2 f2     (5.2)

Multiplying (5.1) by λ1, and subtracting the resulting equation from (5.2), we
have

0 = c2 (λ2 − λ1) f2

Since f2 ≠ 0 and λ2 ≠ λ1, we must have c2 = 0. This and (5.1) imply c1 = 0.
Hence, {f1, f2} is linearly independent. Assume now that the set {f1, . . . , fp−1}
is linearly independent and

0 = c1 f1 + c2 f2 + · · · + cp−1 fp−1 + cp fp     (5.3)

for some constants c1, . . . , cp. Then

0 = L[c1 f1 + · · · + cp fp]
  = c1 L(f1) + · · · + cp L(fp)
  = c1 λ1 f1 + · · · + cp λp fp     (5.4)

Multiplying (5.3) by λp, and then subtracting from (5.4), we have

0 = c1 (λ1 − λp) f1 + c2 (λ2 − λp) f2 + · · · + cp−1 (λp−1 − λp) fp−1

Since the fk's, 1 ≤ k ≤ p − 1, are linearly independent, we must have
ck (λk − λp) = 0 for k = 1, 2, . . . , p − 1. But λp ≠ λk; hence ck = 0 for
k = 1, 2, . . . , p − 1. Thus, (5.3) reduces to cp fp = 0, and we conclude that
cp = 0 also. This of course means that our set of vectors is linearly
independent.
Theorem 5.3. Let V be a real n-dimensional vector space. Let L be a linear
transformation from V into V . Suppose that the characteristic polynomial of L
has n distinct real roots. Then L has a diagonal matrix representation.
Proof. Let the roots of the characteristic polynomial be λ1, λ2, . . . , λn. By
hypothesis they are all different and real. Thus, for each j, ker(L − λj I) will
contain more than just the zero vector. Hence, each of these roots is an
eigenvalue, and if f1, . . . , fn is a set of associated eigenvectors, the previous
lemma ensures that they form a linearly independent set. Since dim(V) = n, they
also form a basis for V. Theorem 5.2 guarantees that L does indeed have a matrix
representation that is diagonal.

Note that if V in the above theorem is an n-dimensional complex vector


space, we would not need to insist that the roots of the characteristic polynomial
be real.
Example 1. Let L be a linear transformation from R4 to R4 whose matrix
representation A with respect to the standard basis is

[ −3  0   2  −4 ]
[ −6  2   2  −5 ]
[  4  0  −1   4 ]
[  6  0  −2   7 ]

Find a diagonal representation for L.


Solution. The characteristic polynomial p(λ) of L is

p(λ) = det [ −3−λ    0      2    −4  ]
           [  −6    2−λ     2    −5  ]
           [   4     0    −1−λ    4  ]
           [   6     0     −2    7−λ ]

     = (2 − λ) det [ −3−λ    2    −4  ]
                   [   4   −1−λ    4  ]
                   [   6    −2    7−λ ]

     = (2 − λ) det [ 3−λ     0    3−λ ]
                   [  4    −1−λ    4  ]
                   [  6     −2    7−λ ]

     = (2 − λ) det [ 3−λ     0     0  ]
                   [  4    −1−λ    0  ]
                   [  6     −2    1−λ ]

     = (−1 − λ)(1 − λ)(2 − λ)(3 − λ)

Thus, the roots of p(λ) are −1, 1, 2, and 3. Since they are real and distinct,
Theorem 5.3 guarantees that R4 will have a basis that consists of eigenvectors
of L. We next find one eigenvector for each of the eigenvalues.
λ = −1:

A − (−1)I = [ −2  0   2  −4 ]
            [ −6  3   2  −5 ]
            [  4  0   0   4 ]
            [  6  0  −2   8 ]

is row equivalent to

[ 1  0  0   1 ]
[ 0  1  0   1 ]
[ 0  0  1  −1 ]
[ 0  0  0   0 ]

Thus, ker(A + I) = S[(−1, −1, 1, 1)], and f1 = (−1, −1, 1, 1) is an eigenvector
corresponding to −1.

λ = 1:

A − I = [ −4  0   2  −4 ]
        [ −6  1   2  −5 ]
        [  4  0  −2   4 ]
        [  6  0  −2   6 ]

is row equivalent to the matrix

[ 1  0  0  1 ]
[ 0  1  0  1 ]
[ 0  0  1  0 ]
[ 0  0  0  0 ]

Thus, f2 = (−1, −1, 0, 1) is an eigenvector corresponding to the eigenvalue 1.
λ = 2: The matrix A − 2I is row equivalent to the matrix

[ 1  0  0   1 ]
[ 0  0  1  −1 ]
[ 0  0  1   1 ]
[ 0  0  0   0 ]

Hence, f3 = (0, 1, 0, 0) is an eigenvector for the eigenvalue 2.


λ = 3: The matrix A − 3I is row equivalent to the matrix

[ 1  0  0   1/2 ]
[ 0  1  0    1  ]
[ 0  0  1  −1/2 ]
[ 0  0  0    0  ]

Hence, f4 = (−1, −2, 1, 2) is an eigenvector corresponding to the eigenvalue
3. Lemma 5.1 guarantees that the four vectors {f1, f2, f3, f4} are linearly
independent, and since dim(R4) = 4, they form a basis. Since L(f1) = −f1,
L(f2) = f2, L(f3) = 2f3, and L(f4) = 3f4, the matrix representation of L
with respect to this basis is
 
[ −1  0  0  0 ]
[  0  1  0  0 ]
[  0  0  2  0 ]
[  0  0  0  3 ]
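Here is a numerical check of this example (a sketch; NumPy is our assumption). The columns of P are the eigenvectors f1, f2, f3, f4, and P^{−1} A P should be the diagonal matrix just displayed:

```python
import numpy as np

A = np.array([[-3.0, 0.0,  2.0, -4.0],
              [-6.0, 2.0,  2.0, -5.0],
              [ 4.0, 0.0, -1.0,  4.0],
              [ 6.0, 0.0, -2.0,  7.0]])

# Columns: f1 = (-1,-1,1,1), f2 = (-1,-1,0,1),
#          f3 = (0,1,0,0),   f4 = (-1,-2,1,2).
P = np.array([[-1.0, -1.0, 0.0, -1.0],
              [-1.0, -1.0, 1.0, -2.0],
              [ 1.0,  0.0, 0.0,  1.0],
              [ 1.0,  1.0, 0.0,  2.0]])

# P is invertible because eigenvectors of distinct eigenvalues
# are linearly independent (Lemma 5.1).
D = np.linalg.inv(P) @ A @ P
```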

It is clear from some of our previous examples that rather than having dis-
tinct eigenvalues it is possible that some eigenvalues will appear with multi-
plicity greater than 1. In this case, Theorem 5.3 is not applicable, and we use
Theorem 5.4, whose proof is omitted.
Theorem 5.4. Let V be an n-dimensional real vector space. Let L : V → V
be a linear transformation, with matrix representation A. Let p(λ), the
characteristic polynomial of L, equal (λ − λ1)^m1 (λ − λ2)^m2 · · · (λ − λp)^mp.
Assume each of the roots λj, 1 ≤ j ≤ p, is real. Then L has a diagonal matrix
representation if and only if dim(ker(A − λj I)) = mj for each of the
eigenvalues λj; that is, the number of linearly independent solutions to the
homogeneous equation (A − λj I)x = 0 must equal the multiplicity of the
eigenvalue λj.

We illustrate this theorem in the next example.

Example 2. Determine which of the following linear transformations have a
diagonal representation. The matrices given are the representations of
the transformations with respect to the standard basis of R3.
 
a.

A = [ 3  1  0 ]
    [ 0  3  0 ]        p(λ) = (3 − λ)^2 (4 − λ)
    [ 0  0  4 ]

The eigenvalues of A are 3, with multiplicity 2, and 4, with multiplicity 1.
The matrix A − 3I equals

[ 0  1  0 ]
[ 0  0  0 ]
[ 0  0  1 ]

Clearly the kernel of this matrix has dimension 1. Thus, dim(ker(A − 3I))
equals 1, which is less than the multiplicity of the eigenvalue 3. Hence, the
linear transformation L, represented by A, cannot be diagonalized.
 
b.

A = [ 3  1  0 ]
    [ 1  3  0 ]        p(λ) = −(λ − 4)^2 (λ − 2)
    [ 0  0  4 ]

The eigenvalue 4 has multiplicity 2 and the eigenvalue 2 has multiplicity 1.
Hence, this linear transformation will have a diagonal representation if and only
if dim(ker(A − 4I)) = 2 and dim(ker(A − 2I)) = 1. Since the last equality is
obvious, we will only check the first one. The matrix A − 4I is row equivalent
to the matrix

[ 1  −1  0 ]
[ 0   0  0 ]
[ 0   0  0 ]

Since this matrix has rank equal to 1, its kernel must have dimension equal to 2.
In fact the eigenspace ker(A − 4I) has {(1, 1, 0), (0, 0, 1)} as a basis. A routine
calculation shows that the vector (1, −1, 0) is an eigenvector corresponding to
the eigenvalue 2. Thus, a basis of eigenvectors is {(1, 1, 0), (0, 0, 1), (1, −1, 0)},
and L with respect to this basis has the diagonal representation

[ 4  0  0 ]
[ 0  4  0 ]
[ 0  0  2 ]
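The rank computations in Example 2 can be phrased as a small test for diagonalizability: compare dim ker(A − λI) = n − rank(A − λI) with the multiplicity of λ. A sketch (NumPy assumed; `eigenspace_dim` is our own illustrative helper):

```python
import numpy as np

def eigenspace_dim(A, lam):
    """dim ker(A - lam*I), computed via the rank-nullity theorem."""
    n = A.shape[0]
    return n - np.linalg.matrix_rank(A - lam * np.eye(n))

# Part a: eigenvalue 3 has multiplicity 2 but a 1-dimensional eigenspace.
A_a = np.array([[3.0, 1.0, 0.0],
                [0.0, 3.0, 0.0],
                [0.0, 0.0, 4.0]])

# Part b: eigenvalue 4 has multiplicity 2 and a 2-dimensional eigenspace.
A_b = np.array([[3.0, 1.0, 0.0],
                [1.0, 3.0, 0.0],
                [0.0, 0.0, 4.0]])

dim_a = eigenspace_dim(A_a, 3.0)   # 1: not diagonalizable
dim_b = eigenspace_dim(A_b, 4.0)   # 2: diagonalizable
```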

This last example is a special case of a very important and useful result.
Before stating it, we remind the reader that a matrix A is said to be symmetric
if A = AT ; cf. Example 2b.

Theorem 5.5. Let L be a linear transformation from Rn into Rn . Let A be


the matrix representation of L with respect to the standard basis of Rn . If A is
a symmetric matrix, then L has a diagonal representation.
We omit the proof of this result and content ourselves with one more example.
Example 3. Let L be a linear transformation from R3 to R3 whose matrix
representation A with respect to the standard basis is given below. Find the
eigenvalues of L and a basis of eigenvectors.
 
A = [  1   3  −3 ]
    [  3   1  −3 ]
    [ −3  −3   1 ]

Solution. We note that A is symmetric, and hence Theorem 5.5 guarantees that
there is a basis of R3 that consists of eigenvectors of A. The characteristic
polynomial of A, p(λ), equals (2 + λ)^2 (7 − λ). Thus, L has two real
eigenvalues, −2 and 7; −2 has multiplicity 2 and 7 has multiplicity 1.
Computing the eigenvectors we have:
λ = −2: A + 2I is row equivalent to the matrix

[ 1  1  −1 ]
[ 0  0   0 ]
[ 0  0   0 ]

Two linearly independent eigenvectors corresponding to −2 are (1, 0, 1) and
(0, 1, 1).

λ = 7: A − 7I is row equivalent to

[ 1  0  1 ]
[ 0  1  1 ]
[ 0  0  0 ]

Thus, {(1, 1, −1)} is a basis for ker(A − 7I), and the set
{(1, 1, −1), (1, 0, 1), (0, 1, 1)} is a basis for R3. Clearly the matrix
representation of L with respect to this basis is

[ 7   0   0 ]
[ 0  −2   0 ]
[ 0   0  −2 ]
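Since A is symmetric, a numerical check can use the symmetric eigensolver, which returns real eigenvalues in ascending order (a sketch; NumPy is our assumption, not the text's):

```python
import numpy as np

A = np.array([[ 1.0,  3.0, -3.0],
              [ 3.0,  1.0, -3.0],
              [-3.0, -3.0,  1.0]])

# For symmetric matrices, eigh is the appropriate routine; its
# eigenvalues come back sorted in ascending order.
evals, evecs = np.linalg.eigh(A)

# Each column of evecs is an eigenvector: A v = lambda v.
for k in range(3):
    assert np.allclose(A @ evecs[:, k], evals[k] * evecs[:, k])
```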
There is one more idea to discuss before we conclude this section on
diagonalization. Suppose we start out with a matrix A (the representation of L
with respect to the standard basis of Rn), and after calculating the eigenvalues
and eigenvectors we see that this particular matrix has n linearly independent
eigenvectors; i.e., Rn has a basis of eigenvectors of A. Suppose the eigenvalues
and eigenvectors are {λj : j = 1, . . . , n} and F = {fj : j = 1, . . . , n},
respectively. Then the matrix representation of L with respect to the basis F is
the diagonal matrix [λj δjk] = D. How are A and D related? The answer to this
has already been given in Section 3.5; for if P is the change of basis matrix
that satisfies [x]S = P [x]F, then

P is easy to write out, for its columns are the coordinates of the eigenvectors
with respect to the standard basis. Moreover, we have

A = P D P^{−1}     (5.5)

or

D = P^{−1} A P

Note that in the formula D = P^{−1}AP, the matrix that multiplies A on the right
is the matrix whose columns consist of the eigenvectors of A. Note also that A
and D are similar; cf. Section 3.5.
Example 4. Let A be the matrix

[  1   3  −3 ]
[  3   1  −3 ]
[ −3  −3   1 ]

Since A is symmetric, we know that it can be diagonalized. In fact, in
Example 3 we computed the eigenvalues and eigenvectors of A and got

λ1 = −2,  f1 = (1, 0, 1)
λ2 = −2,  f2 = (0, 1, 1)
λ3 = 7,   f3 = (1, 1, −1)

Thus,

D = [ −2   0  0 ]          P = [ 1  0   1 ]
    [  0  −2  0 ]   and        [ 0  1   1 ]
    [  0   0  7 ]              [ 1  1  −1 ]

Computing P^{−1}, we have

D = P^{−1} A P

  = (1/3) [  2  −1   1 ] [  1   3  −3 ] [ 1  0   1 ]
          [ −1   2   1 ] [  3   1  −3 ] [ 0  1   1 ]
          [  1   1  −1 ] [ −3  −3   1 ] [ 1  1  −1 ]

  = [ −2   0  0 ]
    [  0  −2  0 ]
    [  0   0  7 ]
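The identity D = P^{−1} A P verified above by hand is equally easy to check numerically (a sketch, NumPy assumed):

```python
import numpy as np

A = np.array([[ 1.0,  3.0, -3.0],
              [ 3.0,  1.0, -3.0],
              [-3.0, -3.0,  1.0]])

# Columns of P are the eigenvectors f1, f2, f3 from Example 3.
P = np.array([[1.0, 0.0,  1.0],
              [0.0, 1.0,  1.0],
              [1.0, 1.0, -1.0]])

P_inv = np.linalg.inv(P)
# P_inv should equal (1/3) [[2, -1, 1], [-1, 2, 1], [1, 1, -1]].
D = P_inv @ A @ P
```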
Formula (5.5) and the following calculation enable us to compute fairly
easily the various powers of A, once we know the eigenvalues and eigenvectors
of this matrix:

A^2 = [P D P^{−1}][P D P^{−1}] = P D (P^{−1} P) D P^{−1} = P D^2 P^{−1}

In a similar fashion, we have

A^n = P D^n P^{−1}     (5.6)

The advantage in using (5.6) to compute A^n is that D is a diagonal matrix and
its powers are easy to calculate.
 
Example 5. Let

A = [ 1  3 ]
    [ 0  4 ]

(cf. Example 4 in Section 1.4). Compute A^n for arbitrary n.

Solution. Computing the characteristic polynomial of A we get

p(λ) = det [ 1−λ   3  ]  =  (λ − 1)(λ − 4)
           [  0   4−λ ]

Computing the eigenvectors of A, we have

A − I = [ 0  3 ]      thus  f1 = (1, 0)
        [ 0  3 ]

A − 4I = [ −3  3 ]    thus  f2 = (1, 1)
         [  0  0 ]

A basis of eigenvectors is {(1, 0), (1, 1)}, and

D = [ 1  0 ]      P = [ 1  1 ]      P^{−1} = [ 1  −1 ]
    [ 0  4 ]          [ 0  1 ]               [ 0   1 ]

Using (5.6) we calculate A^n:

[ 1  3 ]^n   =  [ 1  1 ] [ 1  0 ]^n [ 1  −1 ]
[ 0  4 ]        [ 0  1 ] [ 0  4 ]   [ 0   1 ]

             =  [ 1  1 ] [ 1   0  ] [ 1  −1 ]
                [ 0  1 ] [ 0  4^n ] [ 0   1 ]

             =  [ 1  4^n ] [ 1  −1 ]   =   [ 1  4^n − 1 ]
                [ 0  4^n ] [ 0   1 ]       [ 0    4^n   ]
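The closed form for A^n can be checked against repeated multiplication (a sketch in NumPy, our assumption):

```python
import numpy as np

A = np.array([[1, 3],
              [0, 4]])

def A_power(n):
    """A^n from the closed form derived above: [[1, 4^n - 1], [0, 4^n]]."""
    return np.array([[1, 4**n - 1],
                     [0, 4**n]])

# The closed form agrees with direct matrix powers for several n.
for n in range(1, 6):
    assert np.array_equal(np.linalg.matrix_power(A, n), A_power(n))
```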

Problem Set 5.2


1. For each of the following linear transformations find a basis in which the
matrix representation of L is diagonal:

a. L(x1 , x2 ) = (−x1 , 2x1 + 3x2 )


b. L(x1 , x2 ) = (8x1 + 2x2 , x1 + 7x2 )

2. For each of the following matrices find a diagonal matrix that is similar
to the given matrix:
       
a. [ −1  0 ]    b. [ 8  2 ]    c. [ 4  2 ]    d. [ 1  2 ]
   [  2  3 ]       [ 1  7 ]       [ 2  1 ]       [ 0  0 ]

3. For each of the matrices A in problem 2 compute An for n an arbitrary


positive integer. If A is invertible, compute An for n an arbitrary integer.
[If A is invertible, (5.6) is valid for n = 0, ±1, ±2, . . . .]

4. Let A be the matrix

   A = [  4   1  −1 ]
       [  1   4  −1 ]
       [ −1  −1   4 ]

a. Find a matrix P such that P −1 AP = D is a diagonal matrix.


b. Compute A10 .
c. Compute A−10 .

5. Let A be the matrix

   A = [ 3  0  1  −2 ]
       [ 2  3  2  −4 ]
       [ 0  0  2   0 ]
       [ 1  0  1   0 ]

a. Compute the eigenvalues and eigenvectors of A.


b. Find a matrix P such that P −1 AP is a diagonal matrix.
c. Compute A3 .

6. Let L : V → W be a linear transformation. Suppose dim(V ) = n and


dim(W ) = m. Show that it is possible to pick bases in V and W such
that the matrix representation of L with respect to these bases is an m× n
matrix A = [ajk ] with ajk = 0 if j is not equal to k and akk equals zero
or one. Moreover, the number of ones that occur in A is exactly equal to
dim(Rg(L)) = rank(A).

7. Let Ap = [a_jk^(p)] be a sequence of n × n matrices. We say that
   lim_{p→∞} Ap = A = [ajk] if and only if lim_{p→∞} a_jk^(p) = ajk for each
   j and k. For each of the matrices in problem 2, determine whether or not
   lim_{n→∞} A^n exists, and then evaluate the limit if possible.

8. For the matrix A in problem 5, lim_{n→∞} A^n = ?
 
9. Let

   A = [ λ1   0 ]
       [  0  λ2 ]

   Let g(λ) = Σ_{k=0}^{p} gk λ^k be any polynomial in λ. Define g(A) by
   g(A) = Σ_{k=0}^{p} gk A^k. Show that

   g(A) = [ g(λ1)    0   ]
          [   0    g(λ2) ]

   Generalize this formula to the case when A is an n × n diagonal matrix.

10. Suppose A and B are similar matrices. Thus, A = P B P^-1 for some
    nonsingular matrix P. Let g(λ) be any polynomial in λ. Show that
    g(A) = P g(B) P^-1.

11. If c is a nonnegative real number, we can compute its square root. In fact,
    c^α is defined for any real number α if c > 0. Suppose A is a diagonalizable
    matrix with nonnegative eigenvalues. Thus A = P D P^-1, where D equals
    [λj δjk]. Define A^α = P D^α P^-1, where D^α = [λj^α δjk].

a. Show that (D^(1/2))^2 = D, assuming λj ≥ 0.

b. Show that (A^(1/2))^2 = A, with the same assumption as in a.
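Problems 11–14 all turn on the single rule A^α = P D^α P^-1. As a minimal sketch (not part of the text), the following Python fragment hard-codes the eigen-decomposition of matrix b. from problem 12 below, whose eigenvalues 5 and 0 with eigenvectors (2, 1) and (1, −2) are easy to find by hand; the helper names are our own:

```python
# Sketch of A^alpha = P D^alpha P^(-1) for the 2x2 matrix [[4, 2], [2, 1]]
# (matrix b. of problem 12 below).  Its eigenvalues are 5 and 0, with
# eigenvectors (2, 1) and (1, -2); P and P^(-1) are hard-coded from that.

def matmul(X, Y):
    """Product of two 2x2 matrices stored as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

P     = [[2, 1], [1, -2]]           # columns are the eigenvectors
P_inv = [[0.4, 0.2], [0.2, -0.4]]   # inverse of P (det P = -5)
eigs  = [5.0, 0.0]                  # nonnegative, so A^(1/2) makes sense

def fractional_power(alpha):
    """A^alpha = P D^alpha P^(-1), valid here for alpha >= 0."""
    D_alpha = [[eigs[0] ** alpha, 0.0], [0.0, eigs[1] ** alpha]]
    return matmul(matmul(P, D_alpha), P_inv)

R = fractional_power(0.5)           # a square root of A
A_back = matmul(R, R)               # reproduces [[4, 2], [2, 1]]
```

Note that α = −1/6 would fail for this particular matrix, since the eigenvalue 0 has no negative power; that is exactly the force of the phrase "when possible" in problem 13.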
12. For each matrix A below compute A^(1/2).

        [ 8  1 ]        [ 4  2 ]        [  4   1  −1 ]
    a.  [ 1  7 ]    b.  [ 2  1 ]    c.  [  1   4  −1 ]
                                        [ −1  −1   4 ]

13. For each matrix in the preceding problem compute when possible A^(-1/6)
    and A^(2/3).

14. Compute A^(1/2) and A^(-1/2), where A is the matrix in problem 5. Verify
    that A^(1/2) A^(-1/2) = I.

5.3 Applications
In this section, instead of presenting any new material we discuss and analyze
a few problems by employing the techniques of the preceding sections. Several
things should be observed. One is the recasting of the problem into the lan-
guage of matrices, and the other is the method of analyzing matrices via their
eigenvalues and eigenvectors.
The reader should also be aware that these problems are contrived, in the
sense that they do not model (to the author’s knowledge) any real phenomenon.
They were made up so that the arithmetic would not be too tedious and yet the
flavor of certain types of analyses would be allowed to seep through. Modeling
real world problems mathematically usually leads to a lengthy analysis of the
physical situation. This is done so that the mathematical model is believable;
that is, we wish to analyze a mathematical structure, for us a matrix, and then
be able, from this analysis, to infer something about the original problem. In
most cases this just takes too much time—not the analysis, but making the
model believable.
Example 1. An individual has some money that is to be invested in three
different accounts. The first, second, and third investments realize a profit of 8,
10, and 12 percent per year, respectively. Suppose our investor decides that at
the end of each year, one-fourth of the money earned in the second investment
and three-fourths of that earned in the third investment should be transferred
into the first account, and that one-fourth of the third account’s earnings will
be transferred to the second account. Assuming that each account starts out
with the same amount, how long will it take for the money in the first account
to double?

Solution. The first step in analyzing this problem is to write down any equations
relating the unknowns. Thus let ajk , j = 1, 2, 3; k = 1, 2, . . . represent the
amount invested in account j during the kth year. Thus, if a dollars were
originally invested in each of the three accounts we have a11 = a21 = a31 = a.
Moreover, we also have

        a1(k+1) = 1.08 a1k + (1/4)(0.1 a2k) + (3/4)(0.12 a3k)
        a2(k+1) = a2k + (3/4)(0.1 a2k) + (1/4)(0.12 a3k)                 (5.7)
        a3(k+1) = a3k

Rewriting this as a matrix equation, where Uk = [a1k, a2k, a3k]^T, we have

               [ 1.08  0.025  0.09 ]
        Uk+1 = [ 0.0   1.075  0.03 ] Uk                                  (5.8)
               [ 0.0   0.0    1.0  ]

Setting A equal to the 3 × 3 matrix in (5.8) we have

        Uk = A Uk−1 = A^2 Uk−2 = · · · = A^(k−1) U1                      (5.9)

Our problem is to determine k such that a1k = 2a. Clearly the eigenvalues of A
are 1.08, 1.075, and 1. Routine calculations give us the following eigenvectors:

λ1 = 1.08 f 1 = (1, 0, 0)
λ2 = 1.075 f 2 = (−5, 1, 0)
λ3 = 1 f 3 = (−5, −2, 5)

Setting

            [ 1  −5  −5 ]                      [ 1  5  3   ]
        P = [ 0   1  −2 ] ,   we have P^-1 =   [ 0  1  2/5 ]
            [ 0   0   5 ]                      [ 0  0  1/5 ]

Thus,

                [ 1.08  0      0 ]
        A = P   [ 0     1.075  0 ]  P^-1
                [ 0     0      1 ]

and

              [ (1.08)^k   5[(1.08)^k − (1.075)^k]   3(1.08)^k − 2(1.075)^k − 1 ]
        A^k = [ 0          (1.075)^k                 (2/5)[(1.075)^k − 1]       ]
              [ 0          0                         1                          ]

a1k, which is the first component of the vector A^(k−1) U1, must equal

        a1k = [9(1.08)^(k−1) − 7(1.075)^(k−1) − 1]a                      (5.10)

where U1 = (a, a, a)^T.

If a1k equals 2a, we then have

        9(1.08)^(k−1) − 7(1.075)^(k−1) − 1 = 2                           (5.11)



Equations such as (5.11) are extremely difficult to solve; so we content ourselves


with the following approximation:

        (1.08)^(k−1) = (1.075 + 0.005)^(k−1)
                     = (1.075)^(k−1) + ℓ

If k is not too large, ℓ will be approximately equal to (0.005)(k − 1), a
relatively small number. In any case, since ℓ > 0, we certainly have
(1.08)^(k−1) ≥ (1.075)^(k−1). Therefore, if we find k such that

        9(1.075)^(k−1) − 7(1.075)^(k−1) − 1 = 2                          (5.12)

then certainly (5.11) will hold with the equality sign replaced by ≥. Now (5.12)
implies (1.075)^(k−1) = 3/2. Thus k − 1 = (ln 3 − ln 2)/ln 1.075 ≈ 5.607, or
k ≈ 6.607 years. In other words, the amount of money in the first account will
certainly have doubled after 7 years. 
This example also illustrates another aspect of mathematical analysis:
approximation. Equation (5.11) is extremely difficult to solve, while (5.12)
is much easier and, more importantly, supplies us with a usable solution.
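The approximation can also be checked directly. The following short simulation (a sketch, assuming an initial amount a = 1 in each account) iterates (5.8) until the first account doubles:

```python
# Iterate U_{k+1} = A U_k from (5.8), starting from U_1 = (1, 1, 1),
# and report the first year k in which the first account holds at least 2.

A = [[1.08, 0.025, 0.09],
     [0.0,  1.075, 0.03],
     [0.0,  0.0,   1.0]]

def step(U):
    """One year of interest and transfers: U -> A U."""
    return [sum(A[i][j] * U[j] for j in range(3)) for i in range(3)]

U = [1.0, 1.0, 1.0]       # a_11 = a_21 = a_31 = a = 1
k = 1                     # U holds the balances for year k
while U[0] < 2.0:
    U = step(U)
    k += 1
```

The loop stops at k = 6: the exact recursion doubles during the sixth year, comfortably inside the conservative 7-year bound obtained from (5.12).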
Example 2. Suppose we have two different species and their populations dur-
ing the kth year are represented by xk and yk . Suppose that left alone the
populations grow according to the following equations:

xk+1 = 3xk − yk
yk+1 = −xk + 2yk

Our problem is the following. What percentage of each species can be removed
(harvested) each year so that the populations remain constant?
Solution. Let r1 and r2 denote the fractions of xk and yk that are removed. We
have 0 < r1 , r2 < 1, and at the end of each year we effectively have (1 − r1 )xk
and (1 − r2 )yk members left in each species. These comments imply
        [ xk+1 ]   [ 3(1 − r1)   −(1 − r2) ] [ xk ]
        [ yk+1 ] = [ −(1 − r1)   2(1 − r2) ] [ yk ]

Since we want xk+1 = xk and yk+1 = yk , we need r1 and r2 such that


        [ 3(1 − r1)   −(1 − r2) ]   [ 1  0 ]
        [ −(1 − r1)   2(1 − r2) ] = [ 0  1 ]

Clearly this is impossible. Hence, we cannot remove any percentage and leave
the populations fixed. Well, maybe we asked for too much. Perhaps instead of
wanting our removal scheme to work for any population, we should instead try
to find a population for which there is some removal scheme. Thus, if A is the
matrix

        [ 3(1 − r1)   −(1 − r2) ]
        [ −(1 − r1)   2(1 − r2) ]

we want to find numbers x and y such that A [x y]^T = [x y]^T. In other
words, can we pick r1 and r2 so that 1 not only is an eigenvalue for A but also
has an eigenvector with positive components? Let's compute the characteristic
polynomial of A.
polynomial of A.
        p(λ; r1, r2) = det [ 3(1 − r1) − λ    −(1 − r2)     ]
                           [ −(1 − r1)        2(1 − r2) − λ ]
                     = [λ − 3(1 − r1)][λ − 2(1 − r2)] − (1 − r1)(1 − r2)
                     = λ^2 − (5 − 3r1 − 2r2)λ + 5(1 − r1)(1 − r2)

Now we want p(1; r1 , r2 ) = 0. Setting λ = 1, we have after some simplification

r1 (5r2 − 2) = 3r2 − 1 (5.13)

where we want rk , k = 1 or 2, to lie between 0 and 1. Solving (5.13) for r1 and


checking the various possibilities to ensure that both rk ’s lie between 0 and 1,
we have
        r1 = (3r2 − 1)/(5r2 − 2),      0 < r2 < 1/3  or  1/2 < r2 < 1    (5.14)
For r1 and r2 so related let’s calculate the eigenvectors of the eigenvalue 1. The
matrix

        A − I2 = [ 2 − 3r1      −(1 − r2) ]
                 [ −(1 − r1)    1 − 2r2   ]

is row equivalent to the matrix

        [ 1/(5r2 − 2)    1 ]
        [ 0              0 ]
Thus, the eigenvector (x, y) corresponding to λ = 1 satisfies x = (2 − 5r2)y. If
both x and y are to be positive we clearly need 2 − 5r2 positive. We therefore
have the following solution to our original problem. First r1 = (3r2 − 1)/(5r2 − 2),
where r2 satisfies the inequalities in (5.14). To ensure that x and y are positive
we then restrict r2 to satisfy 0 < r2 < 1/3 only. In conclusion, if we harvest less
than 1/3 of the second species and the fraction (3r2 − 1)/(5r2 − 2) of the first
species, where the two species are in the ratio y/x = 1/(2 − 5r2), we will have
a constant population from one year to the next. 
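A quick numerical check of this conclusion, sketched for one admissible choice of r2 (here r2 = 0.2, which gives r1 = 0.4 and the population ratio x = (2 − 5 r2) y = y):

```python
# Verify Example 2's harvesting scheme for r2 = 0.2: build the matrix,
# apply it to the prescribed population vector, and compare.

r2 = 0.2
r1 = (3 * r2 - 1) / (5 * r2 - 2)            # = 0.4 by (5.13)

A = [[3 * (1 - r1), -(1 - r2)],
     [-(1 - r1),     2 * (1 - r2)]]         # [[1.8, -0.8], [-0.6, 1.6]]

x, y = 2 - 5 * r2, 1.0                      # eigenvector (x, y) for eigenvalue 1
next_x = A[0][0] * x + A[0][1] * y
next_y = A[1][0] * x + A[1][1] * y
# (next_x, next_y) equals (x, y): the populations are constant year to year
```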

Example 3. This last example is interesting in that it is not at all clear how to
express the problem in terms of matrices. Suppose we construct the following
sequence of numbers: let a0 = 0 and a1 = 1, a2 = 1/2, a3 = (1 + 1/2)/2 = 3/4,
a4 = (1/2 + 3/4)/2 = 5/8, and in general an+2 = (an + an+1)/2; that is, an+2 is
the average of the two preceding numbers. Do these numbers approach some
constant as n gets larger and larger?

Solution. Let Un = [an−1  an]^T for n = 1, 2, . . . . Recalling how the an are
defined, we have

        Un+1 = [ an   ] = [ an             ] = [ 0    1   ] [ an−1 ] = A Un     (5.15)
               [ an+1 ]   [ (an−1 + an)/2  ]   [ 1/2  1/2 ] [ an   ]
where A is the 2 × 2 matrix appearing in (5.15). As in similar examples, we
have Un = A^(n−1) U1, where U1 equals [0 1]^T. The characteristic polynomial of
A is

        p(λ) = det(A − λI) = det [ −λ      1       ] = (1/2)(2λ + 1)(λ − 1)
                                 [ 1/2     1/2 − λ ]
Thus, the eigenvalues of A are 1 and −1/2. We will see later that, if the an have
a limiting value, it is necessary for the number 1 to be an eigenvalue.

        λ1 = 1,      f1 = (1, 1)
        λ2 = −1/2,   f2 = (−2, 1)
Setting P equal to

        [ 1  −2 ]                         [ 1/3    2/3 ]
        [ 1   1 ] ,   we calculate P^-1 = [ −1/3   1/3 ]

Thus, we have

        A^n = [ 1  −2 ] [ 1     0        ] [ 1/3    2/3 ]
              [ 1   1 ] [ 0   (−1/2)^n   ] [ −1/3   1/3 ]
Clearly, the larger n gets, the closer the middle matrix on the right-hand side
gets to the matrix [1 0; 0 0], which means that the vector Un = A^(n−1) U1 gets
close to the vector U∞, where

        U∞ = [ 1  −2 ] [ 1  0 ] [ 1/3    2/3 ] [ 0 ]
             [ 1   1 ] [ 0  0 ] [ −1/3   1/3 ] [ 1 ]

           = [ 1/3   2/3 ] [ 0 ]  =  [ 2/3 ]
             [ 1/3   2/3 ] [ 1 ]     [ 2/3 ]
Thus, Un = [an−1  an]^T gets close to [2/3  2/3]^T, which means that the numbers
an get close to the number 2/3. Notice too that U∞ is an eigenvector of A

corresponding to the eigenvalue 1. In fact, this was to be expected from the


equation Un+1 = AUn ; for if the Un converge to something nonzero, called
U∞ , then U∞ must satisfy the equation U∞ = AU∞ . That is, U∞ must be an
eigenvector of the matrix A corresponding to the eigenvalue 1. This is why 1
must be an eigenvalue. 
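The convergence is easy to watch numerically; here is a short sketch of the averaging recursion itself:

```python
# Example 3's sequence: a_0 = 0, a_1 = 1, a_{n+2} = (a_n + a_{n+1}) / 2.
# The error from the limit 2/3 shrinks like (-1/2)^n, so a few dozen
# iterations land within floating-point error of 2/3.

a_prev, a_curr = 0.0, 1.0
for _ in range(60):
    a_prev, a_curr = a_curr, (a_prev + a_curr) / 2.0

gap = abs(a_curr - 2.0 / 3.0)      # essentially zero after 60 steps
```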

Problem Set 5.3


1. Suppose we have a single species that increases from one year to the next
according to the rule
xk+1 = 2xk
What percentage of the population can be harvested and have the popu-
lation remain constant from one year to the next? If the initial population
equals 10, assuming no harvesting, what will the population be in 20 years?
2. The following sequence of numbers is called the Fibonacci sequence:

a0 = 1, a1 = 1, a2 = 2, a3 = 3, . . . , an+1 = an + an−1

Find a general formula for an and determine its limit if one exists.
3. Define the following sequence of numbers:
        a0 = a,   a1 = b,   a2 = (1/3)a0 + (2/3)a1,   and   an+2 = (1/3)an + (2/3)an+1
Find a general formula for an and determine its limiting behavior.
4. Define the following sequence of numbers:

a0 = a a1 = b a2 = ca0 + da1 an+2 = can + dan+1

a. If c and d are nonnegative and c + d = 1, determine a formula for an


and the limiting behavior of this sequence.
b. What happens if we just assume that c + d = 1?
c. What happens if there are no restrictions on c and d?
5. Suppose there are two cities, the sum of whose populations remains con-
stant. Assume that each year a certain fraction of one city’s population
moves to the second city and the same fraction of the second city’s popu-
lation moves to the first city. Let xk and yk denote each city’s population
in the kth year.
a. Find formulas for xk+1 and yk+1 in terms of xk and yk . These for-
mulas will of course depend on the fraction r of the populations that
move.
b. What is the limiting population of each city in terms of the original
populations and r?

6. We again have the same two cities as in problem 5, only this time let’s
assume that r1 represents the fraction of the first city’s population that
moves to the second city and r2 the fraction of the second city’s population
that moves to the first city.
a. Find formulas for xk+1 and yk+1 .
b. What is the limiting population of each city?
7. Suppose that the populations of two species change from one year to the
next according to the equations

xk+1 = 4xk + 3yk


yk+1 = 3xk + 9yk

Are there any initial populations and harvesting schemes that leave the
populations constant from one year to the next?
8. Let u0 = a > 0, u1 = b > 0. Define u2 = u0 u1 , un+2 = un un+1 . What is
lim un ? Hint: log un =?

Supplementary Problems
1. Define and give examples of the following:
a. Eigenvector and eigenvalue
b. Eigenspace
c. Characteristic matrix and polynomial
2. Find a linear transformation from R3 to R3 such that 4 is an eigenvalue
of multiplicity 2 and 5 is an eigenvalue of multiplicity 1.
3. Let L be a linear transformation from V to V . Let λ be any eigenvalue of
   L, and let Kλ be the eigenspace of λ. That is, Kλ = {x : Lx = λx}. Show
   that Kλ is a subspace of V , and that it is invariant under L; cf. number
   13 in the Supplementary Problems for Chapter 3.
4. Let λ be an eigenvalue of L. A nonzero vector x is said to be a generalized
   eigenvector of L if there is a positive integer k such that

        [L − λI]^k x = 0

   Show that the set of all generalized eigenvectors of λ, along with the zero
   vector, is an invariant subspace.
5. Let
             [ 2  1  0 ]
        A =  [ 0  2  1 ]
             [ 0  0  2 ]

a. Find all the eigenvalues of A.


b. Find the eigenspaces of A.
c. Find the generalized eigenspaces of A; cf. problem 4.

6. Let
             [ 1   2  −4 ]
        A =  [ 0  −1   6 ]
             [ 0  −1   4 ]

a. Determine the eigenvalues and eigenspaces of A.


b. Show that A is not similar to a diagonal matrix.
c. Find the generalized eigenspaces of A.

7. Let L be a linear transformation from R2 to R2 . Let x be a vector in R2
   for which x and Lx are not zero, but for which L^2 x = 0.

   a. Show that F = {Lx, x} is linearly independent.
   b. Show that the matrix representation A of L with respect to the basis
      F of part a equals
        [ 0  1 ]
        [ 0  0 ]
   c. Deduce that L^2 y = 0 for every vector y in R2 .

8. Suppose that L is a linear transformation from R3 to R3 , and there is a
   vector x for which L^3 x = 0, but x, Lx, and L^2 x are not equal to the zero
   vector. Set F = {L^2 x, Lx, x}.

   a. Show that F is a basis of R3 .
   b. Find the matrix representation A of L with respect to the basis F .
   c. Show that L^3 y = 0 for every vector y in R3 .

9. Let
             [ 1  0  0  2 ]
        A =  [ 0  1  3  0 ]
             [ 0  0  2  0 ]
             [ 0  0  0  2 ]

   Find a matrix P such that P^-1 A P is a diagonal matrix. Compute A^10.

10. A mapping T from a vector space V into V is called affine if T x = Lx + a,
    where L is a linear transformation and a is a fixed vector in V .

    a. Show that T^n x = L^n x + Σ_(k=0..n−1) L^k a. Thus every power of T
       is also an affine transformation.
    b. Let A = [ 2  3 ]. Define T x = Ax + [ 1 ]. If x = (1, 1), compute T^10 x.
               [ 3  2 ]                    [ 3 ]
       Hint: 1 + r + · · · + r^n = [1 − r^(n+1)][1 − r]^(-1).

11. Let V = { Σ_(n=0..2) an cos nt : an any real number }.

    a. Show that {1, cos t, cos 2t} is a basis of V .
    b. Define L : V → V by L( Σ_(n=0..2) an cos nt ) = Σ_(n=0..2) n^2 an cos nt.
       Find the eigenvalues and eigenvectors of L. Note that L[f ] = −f ′′.
    c. Find L^(1/2) , i.e., find a linear transformation T such that T^2 = L.
    d. For any positive integers p and q find L^(p/q) .
12. Consider the following set of directions. Start at the origin, facing toward
    the positive x1 axis, turn 45 degrees counterclockwise, and walk 1 unit in
    this direction. Thus, you will be at the point (1/√2, 1/√2). Then turn 45
    degrees counterclockwise and walk 1/2 unit in this new direction; then turn
    45 degrees counterclockwise and walk 1/4 unit in this direction. If a cycle
    consists of a 45-degree counterclockwise rotation plus a walk that is half
    as long as the previous walk, where will you be after 20 cycles?
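Problem 12 invites experiment before analysis. One way to sanity-check an answer (a sketch, not the intended matrix solution) is to represent positions as complex numbers, so that a 45-degree counterclockwise turn is multiplication by e^(iπ/4):

```python
import cmath, math

# Simulate problem 12's walk: each cycle turns 45 degrees counterclockwise
# and walks half as far as the previous walk (the first walk has length 1).

turn = cmath.exp(1j * math.pi / 4)   # 45-degree counterclockwise rotation

position = 0 + 0j
heading = 1 + 0j                     # start facing the positive x1 axis
step = 1.0
for _ in range(20):                  # 20 full cycles
    heading *= turn                  # rotate 45 degrees counterclockwise
    position += step * heading       # walk in the new direction
    step /= 2.0                      # next walk is half as long

# position.real and position.imag are the coordinates after 20 cycles
```

After the first cycle the walker is at (1/√2, 1/√2), matching the text, and the positions form a geometric series whose limit is turn/(1 − turn/2); after 20 cycles the walker is already within about 10^-6 of that limit.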
13. Let M be the set of 2 × 2 matrices A for which

        A [ 1 ]  =  λ [ 1 ]
          [ 0 ]       [ 0 ]

    for some real number λ. That is, A is in M if (1, 0) is an eigenvector of A.


a. Show that M is a subspace of M22 .
b. Find a basis for M .
14. Let x be a fixed nonzero vector in Rn . Let Mx be the set of n × n matrices
    for which x is an eigenvector.

    a. Show that Mx is a subspace of Mnn .
    b. If x = e1 , find a basis for Mx .
    c. Can you find a basis for Mx if x is arbitrary? Hint: If B is an
       invertible n × n matrix for which Bx1 = x2 , then BAB^-1 is in Mx2
       if A is in Mx1 .