Suppose that
\[
P = \begin{pmatrix} 1-\alpha & \alpha \\ \beta & 1-\beta \end{pmatrix}, \qquad 0 \le \alpha, \beta \le 1.
\]
Show that
\[
P^n = \frac{1}{\alpha+\beta}\begin{pmatrix} \beta & \alpha \\ \beta & \alpha \end{pmatrix} + \frac{\lambda^n}{\alpha+\beta}\begin{pmatrix} \alpha & -\alpha \\ -\beta & \beta \end{pmatrix},
\]
where $\lambda = 1-\alpha-\beta$. Hence, show that
\[
P^n \to \frac{1}{\alpha+\beta}\begin{pmatrix} \beta & \alpha \\ \beta & \alpha \end{pmatrix} \quad \text{as } n \to \infty,
\]
where $0 < \alpha + \beta < 2$.
Firstly, for $n=1$,
\[
\frac{1}{\alpha+\beta}\begin{pmatrix} \beta & \alpha \\ \beta & \alpha \end{pmatrix} + \frac{\lambda}{\alpha+\beta}\begin{pmatrix} \alpha & -\alpha \\ -\beta & \beta \end{pmatrix} = \begin{pmatrix} 1-\alpha & \alpha \\ \beta & 1-\beta \end{pmatrix} = P.
\]
So the formula holds for $n=1$; suppose it holds for some $n \ge 1$. Next, note that
\[
\begin{pmatrix} \beta & \alpha \\ \beta & \alpha \end{pmatrix} P = \begin{pmatrix} \beta & \alpha \\ \beta & \alpha \end{pmatrix} \quad \text{and} \quad \begin{pmatrix} \alpha & -\alpha \\ -\beta & \beta \end{pmatrix} P = \lambda \begin{pmatrix} \alpha & -\alpha \\ -\beta & \beta \end{pmatrix}.
\]
Therefore
\[
P^{n+1} = P^n P = \frac{1}{\alpha+\beta}\begin{pmatrix} \beta & \alpha \\ \beta & \alpha \end{pmatrix} + \frac{\lambda^{n+1}}{\alpha+\beta}\begin{pmatrix} \alpha & -\alpha \\ -\beta & \beta \end{pmatrix},
\]
and the formula holds for all $n \ge 1$ by induction.
Provided $\alpha + \beta > 0$ and $\alpha + \beta < 2$, $|\lambda| = |1-\alpha-\beta| < 1$, and so $\lambda^n \to 0$ and the $n$-step transition matrix converges to
\[
\frac{1}{\alpha+\beta}\begin{pmatrix} \beta & \alpha \\ \beta & \alpha \end{pmatrix}.
\]
The rate of convergence is geometric, with rate $|\lambda| = |1-\alpha-\beta|$. Discuss the behaviour of the
chain when
(i) $1 < \alpha + \beta < 2$, (ii) $0 < \alpha + \beta < 1$, and (iii) $\alpha + \beta = 1$.
(i) $1 < \alpha + \beta < 2$: here $-1 < \lambda < 0$, so $\lambda^n$ alternates in sign and the chain converges by alternating about the limit.
(ii) $0 < \alpha + \beta < 1$: here $0 < \lambda < 1$, so the chain converges monotonically.
(iii) $\alpha + \beta = 1$: here $\lambda = 0$, so the chain enters the invariant distribution in 1 step.
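As a quick numerical illustration (a sketch; the values $\alpha = 0.3$ and $\beta = 0.2$ are assumed purely for demonstration, not taken from the exercise), we can iterate the two-state chain in R and compare a high power of the transition matrix with the claimed limit:

```r
# Two-state chain with illustrative parameter values (an assumption,
# not taken from the exercise)
alpha <- 0.3
beta  <- 0.2
P <- matrix(c(1 - alpha, alpha,
              beta,      1 - beta), byrow = TRUE, nrow = 2)

# Compute P^20 by repeated multiplication
Pn <- diag(2)
for (i in 1:20) Pn <- Pn %*% P

# Claimed limit: both rows equal (beta, alpha) / (alpha + beta)
limit <- matrix(c(beta, alpha,
                  beta, alpha), byrow = TRUE, nrow = 2) / (alpha + beta)

print(Pn)
print(limit)
```

Here $\lambda = 1 - \alpha - \beta = 0.5$, so after 20 steps the discrepancy between the two matrices is of order $0.5^{20} \approx 10^{-6}$.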
Let the $N$-state transition kernel $P$ have eigenvalues $\lambda_1 = 1, \lambda_2, \ldots, \lambda_N$. If either or both of the following hold:
(i) the eigenvalues are distinct (i.e. $\lambda_i \neq \lambda_j$ for $i \neq j$),
(ii) $P$ is reversible with respect to a stationary distribution $\pi$,
then it is possible to decompose $P$ as $P = A D A^{-1}$, where $A$ is a square matrix whose columns are right eigenvectors of $P$ and
\[
D = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_N).
\]
If the transition matrix $P$ satisfies either or both of the conditions of Theorem 4.10.2 then the $n$-step transition matrix is
\[
P^{(n)} = P^n = A D^n A^{-1}, \qquad \text{where } D^n = \mathrm{diag}(\lambda_1^n, \ldots, \lambda_N^n).
\]
This corollary is very powerful, but actually finding the eigenvalues and the matrix $A$ can be time-consuming. Fortunately R (or other mathematical software) can do much of the hard work.
NB: Note the general form: $p_{ij}^{(n)}$ is a linear combination of $\lambda_1^n, \ldots, \lambda_N^n$.
Consider a Markov chain with the following transition matrix
\[
P = \begin{pmatrix} 0.1 & 0.6 & 0.3 \\ 0.1 & 0.8 & 0.1 \\ 0.3 & 0.1 & 0.6 \end{pmatrix}.
\]
Use R to find the geometric rate of convergence, the asymptotic distribution, and the general formula for the probability that a chain which was started in state 1 will be in state 2 after $n$ time-steps.
> P<-matrix(data=c(0.1,0.6,0.3,0.1,0.8,0.1,0.3,0.1,0.6),byrow=T,nrow=3)
> P
     [,1] [,2] [,3]
[1,]  0.1  0.6  0.3
[2,]  0.1  0.8  0.1
[3,]  0.3  0.1  0.6
> a<-eigen(P)   # find eigenvalues and matrix A
> a$values
[1]  1.00000000  0.57015621 -0.07015621
The geometric rate of convergence is therefore $|\lambda_2| \approx 0.570$.
> A<-a$vectors
> D<-diag(a$values)
> A
           [,1]        [,2]        [,3]
[1,] -0.5773503  0.04837303  0.91438384
[2,] -0.5773503 -0.41610883 -0.05905442
[3,] -0.5773503  0.90802725 -0.40051813
> D
     [,1]      [,2]        [,3]
[1,]    1 0.0000000  0.00000000
[2,]    0 0.5701562  0.00000000
[3,]    0 0.0000000 -0.07015621
> solve(A)
           [,1]       [,2]       [,3]
[1,] -0.2635729 -1.0166385 -0.4518393
[2,]  0.2358878 -0.9083522  0.6724644
[3,]  0.9147313 -0.5938609 -0.3208704
> A %*% D %*% solve(A)   # check: recovers P
     [,1] [,2] [,3]
[1,]  0.1  0.6  0.3
[2,]  0.1  0.8  0.1
[3,]  0.3  0.1  0.6
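Having obtained the decomposition, $P^n = A D^n A^{-1}$ can be formed for any $n$ without repeated multiplication. As a sketch of a sanity check (re-entering $P$ so the snippet is self-contained), we compare the decomposition at $n = 3$ with $P$ multiplied out directly:

```r
P <- matrix(c(0.1, 0.6, 0.3,
              0.1, 0.8, 0.1,
              0.3, 0.1, 0.6), byrow = TRUE, nrow = 3)
a <- eigen(P)
A <- a$vectors

# n-step transition matrix via the decomposition P^n = A D^n A^{-1}
Pstep <- function(n) A %*% diag(a$values^n) %*% solve(A)

# Direct computation of P^3 for comparison
P3 <- P %*% P %*% P
print(Pstep(3))
print(P3)
```

The two matrices agree up to numerical rounding error.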
By definition the chain approaches its asymptotic distribution whatever the initial state. Without loss of generality let us assume that it starts in state 1.
As $n \to \infty$, $\lambda_2^n \to 0$ and $\lambda_3^n \to 0$, so
\[
P^n = A D^n A^{-1} \to A \,\mathrm{diag}(1,0,0)\, A^{-1},
\]
so the first row of the limit, the asymptotic distribution, has entries $a_{11} (A^{-1})_{1j}$ for $j = 1, 2, 3$.
We can use R to do this calculation.
> A[1,1]*solve(A)[1,]
[1] 0.1521739 0.5869565 0.2608696
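As a check (a sketch, re-entering $P$ so it is self-contained), every row of a high power of $P$ should be close to this asymptotic distribution, whatever the starting state:

```r
P <- matrix(c(0.1, 0.6, 0.3,
              0.1, 0.8, 0.1,
              0.3, 0.1, 0.6), byrow = TRUE, nrow = 3)
a <- eigen(P)
A <- a$vectors
pi_limit <- A[1, 1] * solve(A)[1, ]   # asymptotic distribution, as above

# Compute P^100 by repeated multiplication
P100 <- diag(3)
for (i in 1:100) P100 <- P100 %*% P

print(P100)      # every row is (approximately) pi_limit
print(pi_limit)
```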
Finally we consider the probability that a chain which started in state 1 will be in state 2 after $n$ steps.
NB: Recall the general form: $p_{12}^{(n)} = c_1 \lambda_1^n + c_2 \lambda_2^n + c_3 \lambda_3^n$, a linear combination of the powers of the eigenvalues; from the decomposition, $c_k = a_{1k} (A^{-1})_{k2}$.
R can be used to obtain the coefficients in the above example.
> A[1,]*solve(A)[,2]
[1]  0.58695652 -0.04393975 -0.54301677
Hence, to 4 d.p.,
\[
p_{12}^{(n)} = 0.5870 - 0.0439\,(0.5702)^n - 0.5430\,(-0.0702)^n.
\]
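These three coefficients, combined with the eigenvalue powers, reproduce $p_{12}^{(n)}$ exactly. A sketch of a self-contained check against a brute-force matrix power:

```r
P <- matrix(c(0.1, 0.6, 0.3,
              0.1, 0.8, 0.1,
              0.3, 0.1, 0.6), byrow = TRUE, nrow = 3)
a <- eigen(P)
A <- a$vectors

# Coefficients c_k = A[1,k] * (A^{-1})[k,2]
cf <- A[1, ] * solve(A)[, 2]

# p_12^(n) as a linear combination of eigenvalue powers
p12 <- function(n) sum(cf * a$values^n)

# Compare with the (1,2) entry of P^4 computed directly
P4 <- P %*% P %*% P %*% P
print(c(formula = p12(4), direct = P4[1, 2]))
```

The two values agree up to numerical rounding error.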
Consider a Markov chain with the following transition matrix
The eigenvalues of $P$ are . Find the invariant distribution and explain why this is the asymptotic distribution. Hence find the general formula for $p_{11}^{(n)}$.
Note that unlike the previous example, we are not given a decomposition. The Markov chain is irreducible and aperiodic so it has a limiting distribution which is the unique invariant distribution.
Since the matrix is tridiagonal we can use detailed balance to find the invariant distribution.
So $\pi_j p_{j,j+1} = \pi_{j+1} p_{j+1,j}$ for each pair of adjacent states, hence $\pi_{j+1} = \pi_j \, p_{j,j+1} / p_{j+1,j}$, and normalising so the entries sum to 1 gives the invariant distribution $\pi$.
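Since the transition matrix for this example has not survived in these notes, here is a sketch of the detailed-balance recursion applied to an assumed tridiagonal transition matrix (the values below are illustrative only, not the matrix from the exercise):

```r
# Illustrative tridiagonal transition matrix (an assumption for
# demonstration; not the matrix from the exercise)
P <- matrix(c(0.50, 0.50, 0.00,
              0.25, 0.50, 0.25,
              0.00, 0.50, 0.50), byrow = TRUE, nrow = 3)

# Detailed balance for a tridiagonal chain:
#   pi_{j+1} = pi_j * P[j, j+1] / P[j+1, j]
pi_unnorm <- numeric(3)
pi_unnorm[1] <- 1
for (j in 1:2) pi_unnorm[j + 1] <- pi_unnorm[j] * P[j, j + 1] / P[j + 1, j]
pi_hat <- pi_unnorm / sum(pi_unnorm)

print(pi_hat)                     # invariant distribution
print(as.numeric(pi_hat %*% P))   # invariance check: equals pi_hat
```

For this assumed matrix the recursion gives $\pi \propto (1, 2, 1)$, i.e. $\pi = (0.25, 0.5, 0.25)$.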
Now
\[
p_{11}^{(n)} = c_1 \lambda_1^n + c_2 \lambda_2^n + c_3 \lambda_3^n = c_1 + c_2 \lambda_2^n + c_3 \lambda_3^n.
\]
But $p_{11}^{(0)} = 1$, $p_{11}^{(1)} = p_{11}$, and $p_{11}^{(n)} \to \pi_1$ as $n \to \infty$, so $c_1 = \pi_1$. So
\[
c_2 + c_3 = 1 - \pi_1 \qquad \text{and} \qquad c_2 \lambda_2 + c_3 \lambda_3 = p_{11} - \pi_1.
\]
Thus we have a pair of simultaneous equations in $c_2$ and $c_3$; solving them gives $c_2$ and $c_3$, and therefore the general formula for $p_{11}^{(n)}$.
NB: If we had not wished to find the asymptotic distribution we could simply have found and solved the slightly different set of simultaneous equations.