A homogeneous MC has an invariant distribution $\pi$ if $\pi P = \pi$, where $\pi$ is a row vector with $\pi_i \ge 0$ and $\sum_i \pi_i = 1$.
Writing $\pi^{(n)}$ for the distribution of $X_n$, we have $\pi^{(n+1)} = \pi^{(n)} P$. Therefore if $\pi^{(0)} = \pi$ then $\pi^{(1)} = \pi P = \pi$, so also $\pi^{(2)} = \pi$, and so on: $\pi^{(n)} = \pi$ for all $n$. In particular, if the initial distribution happens to be invariant, the distributions of all the $X_n$ remain the same. For this reason, the invariant distribution is sometimes called the stationary distribution of the MC.
Calculation of invariant distributions can be done by directly solving $\pi P = \pi$.
When $\pi P = \pi$ is written out as a set of equations, $\sum_i \pi_i p_{ij} = \pi_j$ for each state $j$, any one of them is a combination of the others: summing over $j$ gives $\sum_j \sum_i \pi_i p_{ij} = \sum_j \pi_j$, or equivalently $\sum_i \pi_i = \sum_j \pi_j$ (since each row of $P$ sums to $1$), which holds automatically. Thus any one equation is redundant and can be removed.
The equations are also homogeneous, i.e. given any solution (other than $\pi = 0$), it is also a solution when multiplied by any constant. For that reason we have to solve the equation $\pi P = \pi$ together with the condition that $\sum_i \pi_i = 1$.
For the two-state chain of Example 4.4.3, with transition matrix
\[
P = \begin{pmatrix} 1-\alpha & \alpha \\ \beta & 1-\beta \end{pmatrix},
\]
the equation $\pi P = \pi$ gives the equation $\pi_1(1-\alpha) + \pi_2\beta = \pi_1$, or $\alpha\pi_1 = \beta\pi_2$. Thus
\[
\pi = \Bigl(\frac{\beta}{\alpha+\beta}, \frac{\alpha}{\alpha+\beta}\Bigr).
\]
The last step was simply to divide the solution $(\beta, \alpha)$, which is proportional to the required $\pi$, by the sum of its components.
This illustrates a useful approach to obtaining $\pi$. First solve $xP = x$, or equivalently $x(P - I) = 0$, by setting some element of $x$ equal to $1$. After solving for the whole vector $x$, divide it by the sum of its elements to obtain $\pi$.
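As a computational aside, the following sketch carries out this recipe numerically. It is a minimal illustration only: the routine name, the use of NumPy, and the values $\alpha = 0.3$, $\beta = 0.1$ in the test case are choices made here, not part of the notes.

```python
import numpy as np

def invariant_distribution(P):
    """Invariant distribution of a transition matrix P, following the recipe
    above: solve x P = x with the first component fixed to 1, then divide by
    the sum of the components.  (Minimal sketch; assumes the solution of
    x P = x is unique up to scaling.)"""
    P = np.asarray(P, dtype=float)
    n = P.shape[0]
    # x P = x  is equivalent to  (P^T - I) x^T = 0.  One of these n equations
    # is redundant, so drop the first one and impose x_0 = 1 in its place.
    A = np.vstack([np.eye(1, n), (P.T - np.eye(n))[1:, :]])
    b = np.zeros(n)
    b[0] = 1.0
    x = np.linalg.solve(A, b)
    return x / x.sum()                 # normalise so the components sum to 1

# Two-state chain of Example 4.4.3 with illustrative values alpha=0.3, beta=0.1:
print(invariant_distribution([[0.7, 0.3],
                              [0.1, 0.9]]))   # approx. [0.25, 0.75]
```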
An invariant distribution may not always exist, that is, it may not be possible to solve $\pi P = \pi$ subject to $\pi_i \ge 0$ and $\sum_i \pi_i = 1$. On a finite state space a chain always has at least one invariant distribution, and it may have more than one. If $\pi$ and $\pi'$ are two different invariant distributions, then another is $\lambda\pi + (1-\lambda)\pi'$ (for $0 < \lambda < 1$). In later sections we will discuss conditions for existence and uniqueness of the invariant distribution.
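To see why such a mixture is again an invariant distribution, note that by linearity
\[
\bigl(\lambda\pi + (1-\lambda)\pi'\bigr)P = \lambda\,\pi P + (1-\lambda)\,\pi' P = \lambda\pi + (1-\lambda)\pi',
\]
and its entries are non-negative and sum to $\lambda + (1-\lambda) = 1$, so it is indeed a distribution.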
Consider example (ii) from 4.2.2.
The states of this chain are reducible into the two closed classes $\{1,4\}$ and $\{2,3\}$. Using the previous Example 4.4.3, the chain restricted to states 1 and 4 has an invariant distribution, say $\mu^{(1,4)}$, and the chain restricted to states 2 and 3 has an invariant distribution, say $\mu^{(2,3)}$ (each regarded as a distribution on all four states, with zero probability on the other two). Let the total probability of starting in states 1 and 4 be $a$; the invariant distribution of the chain is $a\,\mu^{(1,4)} + (1-a)\,\mu^{(2,3)}$.
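Since the transition matrix of example (ii) is not reproduced in this section, the sketch below uses a hypothetical reducible 4-state matrix with the same pattern (states 1 and 4 form one closed class, states 2 and 3 the other) to check numerically that every such mixture satisfies $\pi P = \pi$; the matrix entries are assumptions made purely for illustration.

```python
import numpy as np

# Hypothetical reducible 4-state chain (NOT the matrix of example (ii) in 4.2.2);
# states are listed in the order 1, 2, 3, 4.
P = np.array([[0.5, 0.0, 0.0, 0.5],
              [0.0, 0.3, 0.7, 0.0],
              [0.0, 0.6, 0.4, 0.0],
              [0.2, 0.0, 0.0, 0.8]])

# Invariant distributions of the two restricted 2-state chains, computed from
# the formula (beta/(alpha+beta), alpha/(alpha+beta)) of Example 4.4.3 and
# written as distributions on all four states.
mu_14 = np.array([0.2 / 0.7, 0.0, 0.0, 0.5 / 0.7])   # alpha = 0.5, beta = 0.2
mu_23 = np.array([0.0, 0.6 / 1.3, 0.7 / 1.3, 0.0])   # alpha = 0.7, beta = 0.6

for a in (0.0, 0.25, 1.0):
    mix = a * mu_14 + (1 - a) * mu_23
    print(a, np.allclose(mix @ P, mix))              # every mixture is invariant
```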
Find the invariant distribution for the no claims bonus.
The equation $\pi P = \pi$ gives, in this case, a set of linear equations in the components of $\pi$, one for each level of the bonus scale. To solve them, set the first component of $\pi$ equal to $1$; the equations then determine the remaining components in turn, and dividing the resulting vector by the sum of its components gives $\pi$.
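The transition matrix of the no-claims-bonus chain is the one given earlier in the notes and is not repeated here; purely as an illustration of the same recipe, the sketch below uses a hypothetical three-level bonus scale (the matrix, the claim probability 0.1, and all names are assumptions made for this example).

```python
import numpy as np

# Hypothetical 3-level no-claims-bonus scale (NOT the matrix from the notes):
# a claim-free year (prob. 0.9) moves the driver up one level (or keeps them
# at the top); a claim (prob. 0.1) sends them back to level 0.
P = np.array([[0.1, 0.9, 0.0],
              [0.1, 0.0, 0.9],
              [0.1, 0.0, 0.9]])

# Solve x P = x with x_0 = 1, then normalise, as described above.
n = P.shape[0]
A = np.vstack([np.eye(1, n), (P.T - np.eye(n))[1:, :]])
b = np.zeros(n)
b[0] = 1.0
x = np.linalg.solve(A, b)
print(x / x.sum())   # invariant distribution over the bonus levels: [0.1, 0.09, 0.81]
```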
A homogeneous MC has an asymptotic distribution $\pi^*$ if $\pi^{(n)} \to \pi^*$ as $n \to \infty$, whatever the initial distribution $\pi^{(0)}$.
Let
\[
P = \begin{pmatrix} 1-\alpha & \alpha \\ \beta & 1-\beta \end{pmatrix}
\]
(the chain of Example 4.4.3) and let $\mu_n = \mathbb{P}(X_n = 1)$.
We find an expression for $\mu_n$ as follows. Using $\pi^{(n+1)} = \pi^{(n)} P$,
\[
\mu_{n+1} = \mu_n(1-\alpha) + (1-\mu_n)\beta = (1-\alpha-\beta)\mu_n + \beta.
\]
We use here just the first component of the matrix equation, since $\mathbb{P}(X_n = 2) = 1 - \mu_n$.
Recall the technique for solving difference equations. The auxiliary equation of the homogeneous part $\mu_{n+1} = (1-\alpha-\beta)\mu_n$ has solution $\lambda = 1-\alpha-\beta$. Hence the solution to the homogeneous equation is $C(1-\alpha-\beta)^n$.
To find a particular solution to the non-homogeneous equation, try $\mu_n = c$, a constant. Then $c = (1-\alpha-\beta)c + \beta$ implies $c = \beta/(\alpha+\beta)$. Hence the general solution is
\[
\mu_n = \frac{\beta}{\alpha+\beta} + C(1-\alpha-\beta)^n.
\]
Substituting $n = 0$ gives $C = \mu_0 - \beta/(\alpha+\beta)$. Hence
\[
\mu_n = \frac{\beta}{\alpha+\beta} + \Bigl(\mu_0 - \frac{\beta}{\alpha+\beta}\Bigr)(1-\alpha-\beta)^n.
\]
Since $0 \le \alpha, \beta \le 1$, we have $|1-\alpha-\beta| \le 1$, and so, provided $\alpha$ and $\beta$ are not both $0$ or both $1$, the term $(1-\alpha-\beta)^n \to 0$ as $n \to \infty$. Therefore $\mu_n \to \beta/(\alpha+\beta)$ and
\[
\pi^{(n)} \to \Bigl(\frac{\beta}{\alpha+\beta}, \frac{\alpha}{\alpha+\beta}\Bigr),
\]
whatever the value of $\mu_0$, i.e. whatever the initial distribution $\pi^{(0)}$. Observe that this is the invariant distribution of Example 4.4.3.
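As a quick numerical check of this convergence (the values $\alpha = 0.3$, $\beta = 0.1$ and the choice of starting distribution below are illustrative, not taken from the notes):

```python
import numpy as np

alpha, beta = 0.3, 0.1                         # illustrative values
P = np.array([[1 - alpha, alpha],
              [beta,      1 - beta]])

pi_n = np.array([1.0, 0.0])                    # start deterministically in state 1
for _ in range(50):
    pi_n = pi_n @ P                            # pi^(n+1) = pi^(n) P

print(pi_n)                                    # approx. [0.25, 0.75]
print(beta / (alpha + beta), alpha / (alpha + beta))   # the limiting values
```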
Next we consider some sufficient conditions under which a solution to $\pi P = \pi$ exists.
A sufficient condition for $\pi$ to be the invariant distribution is given by the so-called detailed-balance equations:
\[
\pi_i p_{ij} = \pi_j p_{ji} \qquad \text{for all } i, j.
\]
(These equations do not always hold; but when they do, they are much easier to solve. Also, $\pi_i p_{ij}$ can be interpreted as the probability flow from $i$ to $j$, so detailed balance says that the flow from $i$ to $j$ equals the flow from $j$ to $i$.)
To see that detailed balance implies invariance, sum the left- and right-hand sides of the detailed-balance equations over $i$:
\[
\sum_i \pi_i p_{ij} = \sum_i \pi_j p_{ji} = \pi_j \sum_i p_{ji} = \pi_j,
\]
since the rows of $P$ sum to $1$. This says exactly that $\pi P = \pi$, so $\pi$ is invariant. ∎
Use detailed balance to calculate the invariant distribution of the 4-state Markov chain with transition matrix
Note that the detailed-balance equations hold automatically for $i = j$ and for those pairs $i \ne j$ for which $p_{ij} = p_{ji} = 0$. Hence we only need to consider the pairs $i \ne j$ with $p_{ij} > 0$ or $p_{ji} > 0$.
Solving these equations determines the components of $\pi$ up to a common factor; dividing by their sum then gives the invariant distribution.
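For illustration, the sketch below runs the same detailed-balance calculation on a hypothetical 4-state matrix in which only neighbouring states communicate (the matrix entries are assumptions, not those of the example above).

```python
import numpy as np

# Hypothetical 4-state chain in which only neighbouring states communicate
# (NOT the matrix of the example above, which is given in the notes).
P = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.2, 0.3, 0.5, 0.0],
              [0.0, 0.2, 0.3, 0.5],
              [0.0, 0.0, 0.2, 0.8]])

# Detailed balance only constrains the pairs (i, i+1) with p_{i,i+1} > 0:
#   pi_{i+1} = pi_i * p_{i,i+1} / p_{i+1,i}
x = np.ones(4)
for i in range(3):
    x[i + 1] = x[i] * P[i, i + 1] / P[i + 1, i]
pi = x / x.sum()                      # normalise so the components sum to 1

print(pi)
print(np.allclose(pi @ P, pi))        # pi is indeed invariant
F = pi[:, None] * P                   # F[i, j] = pi_i p_ij, the flow from i to j
print(np.allclose(F, F.T))            # detailed balance: flow i->j = flow j->i
```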
If the transition matrix satisfies detailed balance then the Markov chain is said to be reversible at equilibrium or, more simply, reversible.
At equilibrium the system should behave the same going forward in time as it does going backwards in time, so that someone observing a video of the stochastic process would be unable to tell whether or not it was being shown backwards.
Any transition kernel of the form
\[
P = \begin{pmatrix}
\ast & \ast & 0 & \cdots & 0 \\
\ast & \ast & \ast & \ddots & \vdots \\
0 & \ast & \ast & \ddots & 0 \\
\vdots & \ddots & \ddots & \ddots & \ast \\
0 & \cdots & 0 & \ast & \ast
\end{pmatrix}
\]
satisfies detailed balance. Note that this is a tridiagonal matrix, i.e. a matrix with $p_{ij} = 0$ for $|i - j| > 1$.
We will prove a very similar theorem for continuous-time Markov chains (Theorem 5.4.2), and so we omit the proof here. However, the reason why the theorem holds is that for such a matrix detailed balance requires exactly that
\[
\pi_i p_{i,i+1} = \pi_{i+1} p_{i+1,i} \qquad \text{for each } i,
\]
i.e. there are no other equations that need to be satisfied. Together with the normalisation $\sum_i \pi_i = 1$, this gives $N$ equations in the $N$ unknowns $\pi_1, \dots, \pi_N$, which can be solved.
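Indeed, solving the neighbouring-pair equations one after another (assuming the probabilities $p_{i+1,i}$ of moving down are nonzero) gives
\[
\pi_{i+1} = \pi_i\,\frac{p_{i,i+1}}{p_{i+1,i}}, \qquad\text{and hence}\qquad \pi_i = \pi_1 \prod_{k=1}^{i-1} \frac{p_{k,k+1}}{p_{k+1,k}},
\]
with $\pi_1$ fixed by the normalisation $\sum_i \pi_i = 1$.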