In the examples we have just considered, we have been able to visualise the linear transformations, and therefore deduce what the eigenvalues and eigenvectors are. But in general, we won’t have a good geometrical description. The main purpose of this section is to develop a systematic method for finding eigenvectors and eigenvalues of a linear transformation.
Let $T \colon \mathbb{R}^n \to \mathbb{R}^n$ be a linear transformation with associated matrix $A$. Recall that an eigenvector is a non-zero vector $\mathbf{v}$ subject to $T(\mathbf{v}) = \lambda\mathbf{v}$ for some $\lambda \in \mathbb{R}$. This is equivalent to finding non-trivial solutions to the matrix equation
\[
(A - \lambda I)\mathbf{v} = \mathbf{0},
\]
where $\mathbf{0}$ denotes the zero vector. We learned in Theorem 5.3.5 that this is possible if and only if $\det(A - \lambda I) = 0$.
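Spelling out the equivalence between the eigenvector condition and this matrix equation (here $I$ denotes the identity matrix):
\[
A\mathbf{v} = \lambda\mathbf{v}
\iff A\mathbf{v} - \lambda I \mathbf{v} = \mathbf{0}
\iff (A - \lambda I)\mathbf{v} = \mathbf{0}.
\]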
Example 7.2.1.
Let’s do Example 7.1 again, but this time without using our geometric understanding. Let $T$ be the reflection about the $x$-axis, whose associated matrix is $A = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$, and suppose that $\lambda$ is an eigenvalue for some eigenvector $\mathbf{v}$. By the above remarks and Theorem 5.3.5, we must have $\det(A - \lambda I) = 0$. In other words,
\[
\det\begin{pmatrix} 1 - \lambda & 0 \\ 0 & -1 - \lambda \end{pmatrix} = 0.
\]
This is equivalent to $(1-\lambda)(-1-\lambda) = 0$, which is equivalent to $\lambda^2 - 1 = 0$. So the eigenvalues are $\lambda = \pm 1$, as we saw in Example 7.1. To find the eigenvectors corresponding to these eigenvalues, we consider each case separately, and solve the linear system of equations $(A - \lambda I)\mathbf{v} = \mathbf{0}$.
$\lambda = 1$: The augmented matrix associated to this system of equations is:
\[
\left(\begin{array}{cc|c} 0 & 0 & 0 \\ 0 & -2 & 0 \end{array}\right)
\]
The solution set to this system is $\left\{ \begin{pmatrix} t \\ 0 \end{pmatrix} : t \in \mathbb{R} \right\}$, which is, therefore, the eigenspace for the eigenvalue $1$.
$\lambda = -1$: The augmented matrix associated to this system of equations is:
\[
\left(\begin{array}{cc|c} 2 & 0 & 0 \\ 0 & 0 & 0 \end{array}\right)
\]
The solution set to this system is $\left\{ \begin{pmatrix} 0 \\ t \end{pmatrix} : t \in \mathbb{R} \right\}$, which is, therefore, the eigenspace for the eigenvalue $-1$.
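As a quick check, multiplying a general element of each eigenspace by the matrix $A$ of the reflection confirms the corresponding eigenvalue:
\[
\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}\begin{pmatrix} t \\ 0 \end{pmatrix} = 1 \cdot \begin{pmatrix} t \\ 0 \end{pmatrix},
\qquad
\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}\begin{pmatrix} 0 \\ t \end{pmatrix} = -1 \cdot \begin{pmatrix} 0 \\ t \end{pmatrix}.
\]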
So the procedure consists of two separate parts: (1) find the values $\lambda$ which obey the equation $\det(A - \lambda I) = 0$ (these are the eigenvalues), and then (2) for each eigenvalue, solve the system of linear equations represented by the augmented matrix $[\, A - \lambda I \mid \mathbf{0} \,]$. The solution sets are the eigenspaces.
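For instance, for a general $2 \times 2$ matrix with entries $a, b, c, d$ (placeholders here, not taken from any particular example), step (1) amounts to solving a quadratic equation in $\lambda$:
\[
\det\begin{pmatrix} a - \lambda & b \\ c & d - \lambda \end{pmatrix} = \lambda^2 - (a+d)\lambda + (ad - bc) = 0.
\]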
The expression $\det(A - \lambda I)$ is actually a polynomial in $\lambda$, and since it is so important, it has its own name:
Let $T$ be a linear transformation of $\mathbb{R}^n$ with associated matrix $A$, and think of $\lambda$ as a variable.
The characteristic polynomial of $T$ (or $A$) is
\[
\det(A - \lambda I).
\]
The characteristic equation of $T$ (or $A$) is the equation
\[
\det(A - \lambda I) = 0.
\]
We write $p_T(\lambda)$ (or $p_A(\lambda)$) for the characteristic polynomial of $T$ (or $A$).
As an immediate consequence of Theorem 5.3.5, we record the following fact.
The eigenvalues of a linear transformation are the solutions to its characteristic equation.
Example 7.2.4.
If , then the characteristic polynomial is
and the characteristic equation is .
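For a concrete illustration (with a matrix chosen here purely for illustration), if the matrix were $\begin{pmatrix} 2 & 1 \\ 0 & 3 \end{pmatrix}$, then the characteristic polynomial would be
\[
\det\begin{pmatrix} 2-\lambda & 1 \\ 0 & 3-\lambda \end{pmatrix} = (2-\lambda)(3-\lambda) = \lambda^2 - 5\lambda + 6,
\]
and the characteristic equation would be $\lambda^2 - 5\lambda + 6 = 0$.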
Example 7.2.5.
In Example 7.1, for the rotation by an angle $\theta$, we have
\[
\det(A - \lambda I) = \lambda^2 - 2\cos(\theta)\,\lambda + 1.
\]
So the characteristic equation has real solutions for $\lambda$ if and only if $\cos^2\theta \geq 1$ (recall that $-1 \leq \cos\theta \leq 1$). So either $\theta = 0$, in which case $\lambda = 1$, or $\theta = \pi$, in which case $\lambda = -1$.
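In more detail, applying the quadratic formula to the characteristic equation above gives
\[
\lambda = \frac{2\cos\theta \pm \sqrt{4\cos^2\theta - 4}}{2} = \cos\theta \pm \sqrt{\cos^2\theta - 1},
\]
which is a real number exactly when $\cos^2\theta \geq 1$.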
Example 7.2.6.
Let $T$ be the linear transformation of $\mathbb{R}^2$ given by the matrix $A$. That is,
Find the characteristic polynomial, eigenvalues, and eigenspaces of $T$.
Solution: Its characteristic polynomial is
The roots of the characteristic polynomial are the eigenvalues; call them $\lambda_1$ and $\lambda_2$. For each of these, we calculate the eigenspaces:
$\lambda_1$: The augmented matrix associated to our linear system of equations is
This is equivalent to a single equation, and so the solution set consists of all vectors of the following form:
Therefore, this is the eigenspace for $\lambda_1$, and the eigenvectors are its non-zero elements.
$\lambda_2$: The augmented matrix associated to our linear system of equations is
This is equivalent to a single equation, and so the solution set consists of all vectors of the following form:
Therefore, this is the eigenspace for $\lambda_2$, and the eigenvectors are its non-zero elements.
For another way of writing these eigenspaces, see Example 7.4.
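To see both steps on one further concrete matrix (chosen purely for illustration, and not the matrix of the example above), take $B = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$. Its characteristic polynomial is
\[
\det\begin{pmatrix} 2-\lambda & 1 \\ 1 & 2-\lambda \end{pmatrix} = (2-\lambda)^2 - 1 = (\lambda - 1)(\lambda - 3),
\]
so the eigenvalues are $\lambda = 1$ and $\lambda = 3$. For $\lambda = 1$, the system $(B - I)\mathbf{v} = \mathbf{0}$ reduces to the single equation $x + y = 0$, giving the eigenspace $\left\{ \begin{pmatrix} t \\ -t \end{pmatrix} : t \in \mathbb{R} \right\}$; for $\lambda = 3$, the system $(B - 3I)\mathbf{v} = \mathbf{0}$ reduces to $x - y = 0$, giving the eigenspace $\left\{ \begin{pmatrix} t \\ t \end{pmatrix} : t \in \mathbb{R} \right\}$.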
Example 7.2.7.
Consider the linear transformation of $\mathbb{R}^3$ whose associated matrix is
Find the characteristic polynomial, eigenvalues, and eigenspaces.
Solution: Its characteristic polynomial is
Here we have performed operations which do not change the determinant, as we learned in Section 4. Now we use a cofactor expansion to obtain:
Therefore, the roots of the characteristic polynomial, which we call $\lambda_1$, $\lambda_2$ and $\lambda_3$, are the eigenvalues. For each eigenvalue we now compute the corresponding eigenspace. This involves finding the set of solutions to a homogeneous system of linear equations, which has the augmented matrix $[\, A - \lambda I \mid \mathbf{0} \,]$.
$\lambda_1$: Row reducing the augmented matrix, we obtain:
Therefore, this system is equivalent to two equations, and so the solution set is
So this is the eigenspace for $\lambda_1$, and the eigenvectors are all of its non-zero vectors.
$\lambda_2$: As one might have noticed by now, the final column of zeros in the augmented matrix does not change under row operations. For this reason it is common to omit it during calculations, as we will now do. So we will row reduce the matrix $A - \lambda_2 I$:
Therefore, the system is equivalent to two equations, and the solution set to these equations is given by
So this is the eigenspace for $\lambda_2$, and the eigenvectors are its non-zero vectors.
$\lambda_3$: As in the previous case, we will omit the final column of zeros, and row reduce the matrix $A - \lambda_3 I$:
So this is equivalent to two equations, and therefore the solution set is
So this is the eigenspace for $\lambda_3$, and the eigenvectors are its non-zero vectors.
For another way of writing these eigenspaces, see Example 7.4.
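As a further illustration of the three-dimensional procedure (on a matrix chosen purely for illustration, not the one above), take $C = \begin{pmatrix} 2 & 1 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 1 \end{pmatrix}$. Since $C - \lambda I$ is upper triangular, its determinant is the product of the diagonal entries, so the characteristic polynomial is $(2-\lambda)(3-\lambda)(1-\lambda)$ and the eigenvalues are $2$, $3$ and $1$. Row reducing $C - 2I$, $C - 3I$ and $C - I$ in turn gives the eigenspaces
\[
\left\{ \begin{pmatrix} t \\ 0 \\ 0 \end{pmatrix} : t \in \mathbb{R} \right\}, \qquad
\left\{ \begin{pmatrix} t \\ t \\ 0 \end{pmatrix} : t \in \mathbb{R} \right\}, \qquad
\left\{ \begin{pmatrix} 0 \\ 0 \\ t \end{pmatrix} : t \in \mathbb{R} \right\},
\]
respectively.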