4 Eigenvalues and Eigenvectors and Diagonalization
Definition 4.1. [Eigenvalue and Eigenvector]: Let 𝐴 be an 𝑛 × 𝑛 matrix. The scalar 𝜆 is
called an eigenvalue of 𝐴 if there is a nonzero vector x such that
𝐴x = 𝜆x.
The vector x is called an eigenvector of 𝐴 corresponding to 𝜆.
Figure 4.1:
REMARK:
• Note that an eigenvector cannot be zero. Allowing x to be the zero vector would render the definition
meaningless, because 𝐴0 = 𝜆0 is true for all real values of 𝜆. An eigenvalue of 𝜆 = 0, however, is possible.
• The eigenvalues of a real square matrix may be all real, both real and complex, or all complex.
Let 𝐴 be an 𝑛 × 𝑛 matrix.
1. An eigenvalue of 𝐴 is a scalar 𝜆 such that det(𝜆𝐼 − 𝐴) = 0.
2. The eigenvectors of 𝐴 corresponding to 𝜆 are the nonzero solutions x of (𝜆𝐼 − 𝐴)x = 0.
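As a quick numerical illustration of this characterization, the sketch below (using NumPy, with a small matrix chosen for the sketch rather than taken from the text) extracts the coefficients of det(𝜆𝐼 − 𝐴) and finds the eigenvalues as its roots:

```python
import numpy as np

# Illustrative 2x2 matrix (chosen for this sketch, not from the text)
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.poly returns the coefficients of det(lambda*I - A), highest degree first;
# here the characteristic polynomial is lambda^2 - 4*lambda + 3
coeffs = np.poly(A)

# The eigenvalues of A are the roots of the characteristic polynomial
eigenvalues = np.sort(np.roots(coeffs).real)
print(eigenvalues)   # close to [1. 3.]
```

In practice `np.linalg.eigvals(A)` is preferred; forming the characteristic polynomial explicitly is numerically fragile for large 𝑛, but it mirrors the definition above.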
Definition 4.2. The equation det(𝜆𝐼 − 𝐴) = 0 is called the characteristic equation of 𝐴. Moreover, when
expanded to polynomial form, the polynomial
det(𝜆𝐼 − 𝐴) = 𝜆𝑛 + 𝑐𝑛−1𝜆𝑛−1 + · · · + 𝑐1𝜆 + 𝑐0
is called the characteristic polynomial of 𝐴.
REMARK:
• This definition tells you that the eigenvalues of an 𝑛 × 𝑛 matrix correspond to the roots of the characteristic
polynomial of 𝐴. Because the characteristic polynomial of 𝐴 is of degree 𝑛, 𝐴 can have at most 𝑛 distinct
eigenvalues. Note that the Fundamental Theorem of Algebra states that an 𝑛th-degree polynomial has
precisely 𝑛 roots. These 𝑛 roots, however, include both repeated and complex roots.
• If an eigenvalue 𝛼 occurs as a multiple root (𝑘 times) of the characteristic polynomial, then 𝛼 has
multiplicity 𝑘. This implies that (𝜆 − 𝛼)𝑘 is a factor of the characteristic polynomial but (𝜆 − 𝛼)𝑘+1 is
not a factor of the characteristic polynomial.
If 𝐴 is an 𝑛 × 𝑛 triangular matrix, then its eigenvalues are the entries on its main diagonal.
If 𝐴 is an 𝑛 × 𝑛 matrix with an eigenvalue 𝜆, then the set of all eigenvectors of 𝜆 together with the zero
vector,
{0} ∪ {x : x is an eigenvector of 𝜆},
is a subspace of R𝑛. This subspace is called the eigenspace of 𝜆.
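Numerically, the eigenspace of 𝜆 is the null space of 𝜆𝐼 − 𝐴, and a basis for it can be read off from a singular value decomposition. A minimal sketch (NumPy; the matrix and the eigenvalue 𝜆 = 3 are assumptions chosen for the sketch):

```python
import numpy as np

# Illustrative symmetric matrix with eigenvalue lam = 3 (assumed for this sketch)
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam = 3.0

# The eigenspace of lam is the null space of (lam*I - A); a basis is given by
# the right singular vectors whose singular values are (numerically) zero.
M = lam * np.eye(A.shape[0]) - A
_, s, Vt = np.linalg.svd(M)
basis = Vt[s < 1e-10]          # each row is a basis vector of the eigenspace

# Every basis vector satisfies A x = lam x
for x in basis:
    assert np.allclose(A @ x, lam * x)
print(len(basis))              # dimension of the eigenspace, here 1
```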
Definition 4.3. Let 𝑇 : R𝑛 → R𝑛 be a linear transformation and suppose that, for some nonzero vector x,
𝑇 (x) = 𝜆x
for some scalar 𝜆. The scalar 𝜆 is called an eigenvalue of 𝑇, and x is said to be an eigenvector corresponding
to 𝜆.
Example 4.2. Consider a linear transformation 𝑇 : R³ → R³ such that 𝑇 (v) = 𝐴v, where
𝐴 = ⎡ 1  3  0 ⎤
    ⎢ 3  1  0 ⎥
    ⎣ 0  0 −2 ⎦.
Note that if v = (1, 1, 0)𝑇 then 𝑇 (v) = 𝐴v = (4, 4, 0)𝑇 = 4(1, 1, 0)𝑇 = 4v. Then 4 is an eigenvalue of 𝑇 and v is a corresponding
eigenvector.
By the Cayley–Hamilton Theorem, every square matrix satisfies its own characteristic equation: if
𝑝 (𝜆) = 𝜆𝑛 + 𝑐𝑛−1𝜆𝑛−1 + · · · + 𝑐1𝜆 + 𝑐0 is the characteristic polynomial of 𝐴, then
𝑝 (𝐴) = 𝐴𝑛 + 𝑐𝑛−1𝐴𝑛−1 + · · · + 𝑐1𝐴 + 𝑐0𝐼 = 𝑂,
where 𝑂 is the 𝑛 × 𝑛 null (zero) matrix.
Example 4.3. Verify the Cayley–Hamilton Theorem for the matrix
𝐴 = ⎡  2 −2 ⎤
    ⎣ −2 −1 ⎦.
NOTE:
• If 𝐴 is nonsingular and 𝑝 (𝐴) = 𝐴𝑛 + 𝑐𝑛−1𝐴𝑛−1 + · · · + 𝑐1𝐴 + 𝑐0𝐼 = 𝑂, then
𝐴−1 = (1/𝑐0) (−𝐴𝑛−1 − 𝑐𝑛−1𝐴𝑛−2 − · · · − 𝑐2𝐴 − 𝑐1𝐼 ).
• The Cayley–Hamilton Theorem can be used to calculate powers of the square matrix 𝐴. For example, if
the characteristic polynomial of a 2 × 2 matrix 𝐴 is 𝜆² − 2𝜆 − 1, then 𝐴² = 2𝐴 + 𝐼, and hence
𝐴³ = 𝐴 · 𝐴² = 2𝐴² + 𝐴 = 2(2𝐴 + 𝐼 ) + 𝐴 = 5𝐴 + 2𝐼,
and so on.
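Both points in the note can be checked numerically. The sketch below (NumPy) verifies the Cayley–Hamilton Theorem for the 2 × 2 matrix of Example 4.3 above and then uses the characteristic-polynomial coefficients to recover 𝐴⁻¹:

```python
import numpy as np

A = np.array([[ 2.0, -2.0],
              [-2.0, -1.0]])      # the 2x2 matrix from the example above

# Coefficients [1, c1, c0] of the characteristic polynomial det(lambda*I - A);
# here: lambda^2 - lambda - 6
c = np.poly(A)

# Cayley-Hamilton: A^2 + c1*A + c0*I = O
p_of_A = A @ A + c[1] * A + c[2] * np.eye(2)
assert np.allclose(p_of_A, 0)

# For n = 2 the inverse formula reduces to A^{-1} = (1/c0)(-A - c1*I)
A_inv = (-A - c[1] * np.eye(2)) / c[2]
assert np.allclose(A_inv, np.linalg.inv(A))
```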
Definition 4.4 – Minimal polynomial. The minimal polynomial of a square matrix 𝐴, 𝜇𝐴, is the unique
monic polynomialᵃ 𝑝 of smallest degree such that 𝑝 (𝐴) = 𝑂.
a A monic polynomial is a polynomial whose highest-degree coefficient equals 1.
Theorem 4.5
Let 𝐴 be an 𝑛 × 𝑛 matrix. Then the characteristic polynomial of 𝐴 is a polynomial multiple of the minimal
polynomial of 𝐴.
Theorem 4.6
Let 𝐴 be an 𝑛 × 𝑛 matrix. Then the roots of the minimal polynomial of 𝐴 are precisely the eigenvalues of
𝐴.
𝐴 = ⎡ 0 0 0 0 −3 ⎤
    ⎢ 1 0 0 0  6 ⎥
    ⎢ 0 1 0 0  0 ⎥
    ⎢ 0 0 1 0  0 ⎥
    ⎣ 0 0 0 1  0 ⎦
Definition 4.5. Let 𝐴 be an 𝑛 × 𝑛 square matrix and x be an 𝑛 × 1 vector. Then the quadratic form x𝑇 𝐴x is
(a) positive definite if x𝑇 𝐴x > 0 for x ≠ 0;
(b) negative definite if x𝑇 𝐴x < 0 for x ≠ 0;
(c) indefinite if x𝑇 𝐴x has both positive and negative values.
Theorem 4.7
Let 𝐴 be a symmetric 𝑛 × 𝑛 matrix. Then the quadratic form x𝑇 𝐴x is
(a) positive definite if and only if all eigenvalues of 𝐴 are positive;
(b) negative definite if and only if all eigenvalues of 𝐴 are negative;
(c) indefinite if and only if 𝐴 has both positive and negative eigenvalues.
R E M A R K : A quadratic form for which x𝑇 𝐴x ≥ 0 for all x ≠ 0 is called positive semidefinite, and one for
which x𝑇 𝐴x ≤ 0 for all x ≠ 0 is called negative semidefinite. Every positive definite form is positive semidefinite,
but the converse is not always true.
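For a symmetric matrix, definiteness can be tested from the signs of its eigenvalues (the standard eigenvalue criterion). A sketch, with illustrative matrices chosen for the example:

```python
import numpy as np

def classify_quadratic_form(A, tol=1e-10):
    """Classify x^T A x for a symmetric matrix A by the signs of its eigenvalues."""
    vals = np.linalg.eigvalsh(A)     # real eigenvalues of a symmetric matrix
    if np.all(vals > tol):
        return "positive definite"
    if np.all(vals < -tol):
        return "negative definite"
    if np.all(vals >= -tol):
        return "positive semidefinite"
    if np.all(vals <= tol):
        return "negative semidefinite"
    return "indefinite"

# Eigenvalues 1 and 3 -> positive definite
print(classify_quadratic_form(np.array([[2.0, 1.0], [1.0, 2.0]])))
# Eigenvalues -1 and 3 -> indefinite
print(classify_quadratic_form(np.array([[1.0, 2.0], [2.0, 1.0]])))
```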
Theorem 4.8
Definition 4.6 – Spectral Radius. The spectral radius 𝜌 (𝐴) of a square matrix 𝐴 is the largest absolute
value of any eigenvalue of 𝐴:
𝜌 (𝐴) = max{|𝜆| : 𝜆 is an eigenvalue of 𝐴}.
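The definition translates into a one-line computation (NumPy; the matrix is an illustrative choice with eigenvalues 1 and 2):

```python
import numpy as np

# Illustrative matrix; its characteristic polynomial is lambda^2 - 3*lambda + 2,
# so its eigenvalues are 1 and 2
A = np.array([[0.0, -2.0],
              [1.0,  3.0]])

# Spectral radius: largest absolute value of any eigenvalue
rho = max(abs(lam) for lam in np.linalg.eigvals(A))
print(rho)   # approximately 2.0
```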
4.2 Diagonalization
Theorem 4.9
If 𝐴 and 𝐵 are similarᵃ 𝑛 × 𝑛 matrices, then they have the same eigenvalues.
a For square matrices 𝐴 and 𝐵 of order 𝑛, 𝐵 is said to be similar to 𝐴 if there exists an invertible matrix 𝑃 such that 𝐵 = 𝑃 −1𝐴𝑃.
Example 4.6. (a) The matrix
𝐴 = ⎡  1  0  0 ⎤
    ⎢ −1  1  1 ⎥
    ⎣ −1 −2  4 ⎦
is diagonalizable: there exists an invertible matrix
𝑃 = ⎡ 1 0 0 ⎤
    ⎢ 1 1 1 ⎥
    ⎣ 1 1 2 ⎦
such that
𝐵 = 𝑃 −1𝐴𝑃 = ⎡ 1 0 0 ⎤
             ⎢ 0 2 0 ⎥
             ⎣ 0 0 3 ⎦.
(b) Note that 𝐴 and 𝐵 are similar since there exists an invertible matrix 𝑃 such that 𝐵 = 𝑃 −1𝐴𝑃. Then by
Theorem 4.9 𝐴 and 𝐵 have the same eigenvalues. Since the eigenvalues of 𝐵 are 𝜆1 = 1, 𝜆2 = 2 and 𝜆3 = 3,
the eigenvalues of 𝐴 are 1, 2, 3.
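The claim can be confirmed numerically with the matrices of Example 4.6 (a NumPy sketch):

```python
import numpy as np

# A and P from Example 4.6
A = np.array([[ 1.0,  0.0, 0.0],
              [-1.0,  1.0, 1.0],
              [-1.0, -2.0, 4.0]])
P = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])

B = np.linalg.inv(P) @ A @ P          # B is similar to A

# Similar matrices have the same eigenvalues: 1, 2, 3
eig_A = np.sort(np.linalg.eigvals(A).real)
eig_B = np.sort(np.linalg.eigvals(B).real)
assert np.allclose(eig_A, eig_B)
assert np.allclose(eig_A, [1.0, 2.0, 3.0])
```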
Let 𝐴 be an 𝑛 × 𝑛 matrix.
(i) Find 𝑛 linearly independent eigenvectors p1, p2, . . . , p𝑛 for 𝐴 with corresponding eigenvalues 𝜆1, 𝜆2, . . . , 𝜆𝑛 .
If 𝑛 linearly independent eigenvectors do not exist, then 𝐴 is not diagonalizable.
(ii) If 𝐴 has 𝑛 linearly independent eigenvectors, let 𝑃 be the 𝑛 × 𝑛 matrix whose columns consist of these
eigenvectors. That is,
𝑃 = [ p1  p2  · · ·  p𝑛 ].
(iii) The diagonal matrix 𝐷 = 𝑃 −1𝐴𝑃 will have the eigenvalues 𝜆1, 𝜆2, . . . , 𝜆𝑛 on its main diagonal (and zeros
elsewhere). Note that the order of the eigenvectors used to form 𝑃 will determine the order in which the
eigenvalues appear on the main diagonal of 𝐷.
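Steps (i)–(iii) can be sketched with NumPy; the matrix used below is the one from Example 4.7(b), whose eigenvalues turn out to be distinct:

```python
import numpy as np

# Matrix from Example 4.7(b)
A = np.array([[ 1.0, -1.0, -1.0],
              [ 1.0,  3.0,  1.0],
              [-3.0,  1.0, -1.0]])

# Step (i): eigenvalues and eigenvectors (the columns of P)
vals, P = np.linalg.eig(A)

# Step (ii): A is diagonalizable iff the n eigenvectors are linearly
# independent, i.e. iff P is invertible
assert np.linalg.matrix_rank(P) == A.shape[0]

# Step (iii): D = P^{-1} A P is diagonal, with the eigenvalues on the main
# diagonal in the same order as the eigenvector columns of P
D = np.linalg.inv(P) @ A @ P
assert np.allclose(D, np.diag(vals))
print(np.sort(vals.real))   # eigenvalues -2, 2, 3
```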
Example 4.7. Determine whether the following matrices are diagonalizable. If the matrix is diagonalizable,
then find a matrix 𝑃 such that 𝑃 −1𝐴𝑃 is diagonal.
(a) 𝐴 = ⎡ 1 2 ⎤
        ⎣ 0 1 ⎦
(b) 𝐴 = ⎡  1 −1 −1 ⎤
        ⎢  1  3  1 ⎥
        ⎣ −3  1 −1 ⎦
If 𝑛 × 𝑛 matrix 𝐴 has 𝑛 distinct eigenvalues, then the corresponding eigenvectors are linearly independent
and 𝐴 is diagonalizable.
R E M A R K : This condition is sufficient but not necessary for diagonalization. That is, a diagonalizable
matrix need not have distinct eigenvalues.
Example 4.8. Let 𝑇 : R³ → R³ be the linear transformation given by
𝑇 (𝑥1, 𝑥2, 𝑥3) = (𝑥1 − 𝑥2 − 𝑥3, 𝑥1 + 3𝑥2 + 𝑥3, −3𝑥1 + 𝑥2 − 𝑥3).
If possible, find a basis 𝐵 for R³ such that the matrix for 𝑇 relative to 𝐵 is diagonal.
Let 𝐴 be an 𝑛 × 𝑛 matrix. Then 𝐴 is orthogonally diagonalizable and has real eigenvalues if and only if 𝐴
is symmetric.
Let 𝐴 be an 𝑛 × 𝑛 matrix.
(i) Find all eigenvalues of 𝐴 and determine the multiplicity of each.
(ii) For each eigenvalue of multiplicity 1, choose a unit eigenvector. (Choose any eigenvector and then
normalize it.)
(iii) For each eigenvalue of multiplicity 𝑘 ≥ 2, find a set of 𝑘 linearly independent eigenvectors. If this set is
not orthonormal, apply the Gram-Schmidt orthonormalization process.
(iv) Steps (ii) and (iii) produce an orthonormal set of eigenvectors. Use these eigenvectors to form the columns
of 𝑃. The matrix 𝑃 −1𝐴𝑃 = 𝑃 𝑇 𝐴𝑃 = 𝐷 will be diagonal. (The main diagonal entries of 𝐷 are the eigenvalues
of 𝐴.)
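For a symmetric matrix, the whole procedure is available in one call: `np.linalg.eigh` returns the eigenvalues together with an orthonormal set of eigenvectors. A sketch with an illustrative symmetric matrix (including a repeated eigenvalue, so step (iii) is exercised):

```python
import numpy as np

# Illustrative symmetric matrix; eigenvalues are 1, 3, 3
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])

# eigh returns real eigenvalues and an orthonormal eigenvector matrix P
vals, P = np.linalg.eigh(A)

assert np.allclose(P.T @ P, np.eye(3))          # P is orthogonal: P^{-1} = P^T
assert np.allclose(P.T @ A @ P, np.diag(vals))  # P^T A P = D is diagonal
```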