Revision

This stochastic processes course builds on the MATH230 probability course. The following is a brief reminder of some of the important topics from that course which we will be using.

For the majority of the course we will use probability results for discrete random variables or events.

Discrete Random Variables

  • (i)

    Rules for Conditional Probability. For events A, B, and C,

    P(A|B) = \frac{P(A,B)}{P(B)},
    P(A,B|C) = P(A|B,C) P(B|C).
  • (ii)

    Law of Total Probability. Consider a partition A_1, A_2, \ldots, A_k of the sample space. Then for any event B,

    P(B) = \sum_{i=1}^{k} P(B, A_i) = \sum_{i=1}^{k} P(B|A_i) P(A_i).

    Rules (i) and (ii) are illustrated in the sketch following this list.
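
As a quick illustration of both rules, here is a minimal Python sketch; the partition, the event B, and all probability values are made-up numbers for illustration only.

    # Numerical check of the conditional-probability rules and the law
    # of total probability (all probability values are made up).
    prior = {1: 0.20, 2: 0.50, 3: 0.30}   # P(A_i) for a partition A_1, A_2, A_3
    joint = {1: 0.10, 2: 0.25, 3: 0.15}   # P(B, A_i) for some event B

    # Conditional probability: P(B|A_i) = P(B, A_i) / P(A_i).
    cond = {i: joint[i] / prior[i] for i in prior}

    # Law of total probability: both sums give the same P(B).
    p_b_from_joint = sum(joint.values())
    p_b_from_cond = sum(cond[i] * prior[i] for i in prior)
    print(p_b_from_joint, p_b_from_cond)   # both equal 0.50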

Expectations

Consider a random variable X which takes values x_i with probabilities p_i.

  • (i)

    Function of a random variable.

    E(h(X)) = \sum_i h(x_i) p_i.
  • (ii)

    Linearity. For constants a and b,

    E(aX+b)=aE(X)+b.
  • (iii)

    Products. If Y is a random variable which is independent of X, then

    E(XY)=E(X)E(Y).
  • (iv)

    Conditional Expectation. For two random variables X and Y,

    E(h(X)|Y=y) = \sum_i h(x_i) P(X=x_i|Y=y),

    that is, the conditional expectation is obtained by taking expectations with respect to the conditional distribution of X given Y=y. Rules (i)–(iv) are illustrated in the sketch following this list.
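
The four rules above can be checked numerically. Below is a minimal Python sketch; the distributions of X and Y, the function h, the constants a and b, and the joint pmf in part (iv) are all made-up assumptions for illustration only.

    # Numerical illustration of expectation rules (i)-(iv); every
    # distribution and number below is made up for illustration only.

    # X takes values x_i with probabilities p_i.
    xs = [0, 1, 2]
    ps = [0.2, 0.5, 0.3]
    e_x = sum(x * p for x, p in zip(xs, ps))   # E(X)

    # (i) Function of a random variable: E(h(X)) = sum_i h(x_i) p_i.
    def h(x):
        return x ** 2

    e_h = sum(h(x) * p for x, p in zip(xs, ps))

    # (ii) Linearity: E(aX + b) = a E(X) + b.
    a, b = 3.0, 1.0
    e_lin = sum((a * x + b) * p for x, p in zip(xs, ps))
    assert abs(e_lin - (a * e_x + b)) < 1e-12

    # (iii) Products: if Y is independent of X, the joint pmf
    # factorises as p_i q_j, so E(XY) = E(X) E(Y).
    ys, qs = [1, 4], [0.6, 0.4]
    e_y = sum(y * q for y, q in zip(ys, qs))
    e_xy = sum(x * y * p * q for x, p in zip(xs, ps)
               for y, q in zip(ys, qs))
    assert abs(e_xy - e_x * e_y) < 1e-12

    # (iv) Conditional expectation: average h(x_i) under the
    # conditional pmf P(X = x_i | Y = y), for a made-up joint pmf.
    joint = {(0, 1): 0.10, (1, 1): 0.30, (2, 1): 0.20,
             (0, 4): 0.10, (1, 4): 0.20, (2, 4): 0.10}
    y = 1
    p_y = sum(p for (xv, yv), p in joint.items() if yv == y)
    e_h_given_y = sum(h(xv) * p for (xv, yv), p in joint.items()
                      if yv == y) / p_y
    print(e_h, e_h_given_y)   # 1.7 and 1.8333...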

When analysing continuous time Markov chains we will use ideas related to those for continuous random variables.

Continuous Random Variables

Consider a continuous random variable X with distribution function F(x) = P(X ≤ x). An important idea is the pdf f(x) of X. This satisfies

    f(x) = \frac{d}{dx} F(x).

f(x) is not the probability that X = x; rather, it is related to probability via

    P(X \in (x, x+h)) \approx f(x) h,

for small h.
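
To see this approximation in action, here is a minimal Python sketch using an exponential distribution; the rate, the point x, and the values of h are arbitrary choices for illustration only.

    import math

    # P(X in (x, x+h)) ≈ f(x) h for small h, illustrated with an
    # Exponential(rate) distribution (all numbers are made up).
    rate = 2.0

    def F(x):   # cdf: F(x) = P(X <= x) = 1 - exp(-rate * x)
        return 1.0 - math.exp(-rate * x)

    def f(x):   # pdf: f(x) = dF/dx = rate * exp(-rate * x)
        return rate * math.exp(-rate * x)

    x = 0.5
    for h in (0.1, 0.01, 0.001):
        exact = F(x + h) - F(x)   # P(X in (x, x+h))
        approx = f(x) * h
        print(h, exact, approx)   # agreement improves as h shrinks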

Other Work

In this course we will also use:

  • (i)

    Binomial Expansion

    (1+az)^n = 1 + naz + \frac{n(n-1)}{2}(az)^2 + \cdots + \frac{n(n-1)\cdots(n-k+1)}{k!}(az)^k + \cdots,

    which, for non-negative integer n, simplifies to the finite sum

    \sum_{i=0}^{n} \binom{n}{i} (az)^i.

    A numerical check of this identity is sketched at the end of this section.
  • (ii)

    Matrices. The use of matrices is important for analysing Markov chains. If we have an n×n matrix P and a 1×n vector π = (π_1, \ldots, π_n), then the rules for multiplying matrices give:

    (\pi P)_i = \sum_{k=1}^{n} \pi_k P_{ki},
    (P^2)_{ij} = \sum_{k=1}^{n} P_{ik} P_{kj}.

    Both operations are spelled out in the sketch below.
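
As a check of the Binomial Expansion, here is a minimal Python sketch; the values of n, a, and z are arbitrary assumptions for illustration only.

    import math

    # Check that the finite binomial sum reproduces (1 + az)^n for
    # non-negative integer n (all numbers are made up).
    n, a, z = 5, 0.3, 0.7

    direct = (1 + a * z) ** n
    series = sum(math.comb(n, i) * (a * z) ** i for i in range(n + 1))
    print(direct, series)   # the two values agree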
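Finally, here is a minimal sketch of the two matrix rules, written with plain Python lists so that the index formulas above are explicit; the matrix P and the vector π are made-up examples (for a Markov chain, P would be a transition matrix and π a distribution over states).

    # (pi P)_i = sum_k pi_k P_{ki} and (P^2)_{ij} = sum_k P_{ik} P_{kj},
    # spelled out for a made-up 2x2 matrix P and vector pi.
    P = [[0.9, 0.1],
         [0.4, 0.6]]
    pi = [0.5, 0.5]
    n = len(P)

    # Vector-matrix product: for a Markov chain, the distribution
    # after one step.
    pi_P = [sum(pi[k] * P[k][i] for k in range(n)) for i in range(n)]

    # Matrix square: two-step transition probabilities.
    P2 = [[sum(P[i][k] * P[k][j] for k in range(n)) for j in range(n)]
          for i in range(n)]

    print(pi_P)   # approximately [0.65, 0.35]
    print(P2)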