
Chapter 5 Spectral decomposition

It is artificial to divide mathematics into separate chunks, and then to say that you bring them together as though this is a surprise. On the contrary, they are all part of the mathematical puzzle.

– Michael Atiyah (1929–)

Fields medal and Abel prize winner

In this chapter, we will almost exclusively consider the vector space $\mathbb{R}^n$, equipped with the standard inner product given by the scalar product. If $x, y \in \mathbb{R}^n$ are written as column vectors, then notice that the inner product may be expressed as follows (see Exercise 3.10):

\[ \langle x, y \rangle = x^T y. \]
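This identity is easy to check numerically. The following sketch (using numpy, with example vectors chosen here for illustration) compares the scalar product with the matrix product $x^T y$ of column vectors:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, -1.0, 2.0])

# Inner product as the scalar (dot) product.
ip_dot = np.dot(x, y)

# The same value via x^T y, treating x and y as n-by-1 column vectors;
# the matrix product is then a 1-by-1 matrix, whose single entry we extract.
xc = x.reshape(-1, 1)
yc = y.reshape(-1, 1)
ip_matrix = (xc.T @ yc).item()

# Both give 1*4 + 2*(-1) + 3*2 = 8.
```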

The main result of this chapter is the spectral decomposition, Theorem 5.7, which is one of the primary reasons we spent so much effort computing eigenvalues and eigenvectors in MATH105. The spectral decomposition is used in a variety of contexts: in statistics, notably in the study of Markov chains (see MATH332) and in principal component analysis, which decomposes the covariance matrix by changing to a basis of uncorrelated variables (see MATH330 or MATH451); in pure mathematics, where it has been generalized to infinite-dimensional Hilbert spaces (see MATH317 and MATH411); and in combinatorics, where spectral graph theory studies a graph via the eigenvalues of its adjacency matrix (see MATH327).

For us, the spectral decomposition is valid only for real symmetric matrices. For matrices which are either not symmetric or not real, we develop a different tool, the Jordan normal form, in Chapter 6.
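To preview the theorem concretely: for a real symmetric matrix $A$ one can find an orthogonal matrix $P$, whose columns are eigenvectors of $A$, and a diagonal matrix $D$ of eigenvalues with $A = P D P^T$. A minimal numerical sketch (the matrix below is an example chosen for illustration; `numpy.linalg.eigh` is a routine for symmetric matrices):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])  # real symmetric example

# eigh returns eigenvalues in ascending order and an orthogonal
# matrix P whose columns are the corresponding eigenvectors.
eigvals, P = np.linalg.eigh(A)
D = np.diag(eigvals)

# Spectral decomposition: A = P D P^T.
reconstructed = P @ D @ P.T
```

For this matrix the eigenvalues are 1 and 3, and the reconstruction recovers $A$ up to floating-point rounding.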