2 Week 7: Introduction to MCMC

2.1 Introduction and background

Markov Chain Monte Carlo, often referred to simply as MCMC, has probably been the single most important advance in statistics over the last 25 years or so. MCMC has its origins in Metropolis et al. (1953) and Hastings (1970). However, it was only around 1990, with the seminal paper of Gelfand and Smith (1990), that the MCMC idea began to invade mainstream statistics, and a considerable amount of research has been devoted to it ever since.

MCMC utilises Monte Carlo methods based upon Markov chains (as the name suggests!). MCMC is primarily used in Bayesian statistics to provide samples from the posterior distribution.

We cover three preliminary topics before discussing MCMC. These are:-

  1. Monte Carlo methods;

  2. Markov chains;

  3. Bayesian statistics (very brief overview – details in MATH553).

2.2 Monte Carlo Methods

Monte Carlo methods essentially mean simulation. They are used primarily to evaluate integrals (expectations of random variables) where analytical solutions are either not possible or extremely laborious.

The simplest example is the case of the expectation

\mathbb{E}[X] = \int x f(x) \, dx

of a random variable X which has no analytical form. We can estimate the mean of X, 𝔼[X], by taking a sample from the distribution of X and using the sample mean as an estimate of the theoretical mean. That is:-

  1. Take a sample of size n, x_1, x_2, …, x_n, from the distribution of X with density f(⋅). We assume that the observations are independent.

  2. Then

     \mathbb{E}[X] \approx \frac{1}{n} \sum_{i=1}^{n} x_i.

For example, X could represent the height (in cm) of a 5-year-old child. An estimate of 𝔼[X] can then be obtained by taking a class of twenty 5-year-old children and measuring their heights.

This argument can be generalised. Suppose that we wish to evaluate θ = 𝔼[ϕ(X)] for some function ϕ(⋅), that is,

\theta = \int \phi(x) f(x) \, dx.

Then if we take a sample x_1, x_2, …, x_n from the distribution of X, we have that

\hat{\theta} = \frac{1}{n} \sum_{i=1}^{n} \phi(x_i)

is an unbiased estimate of θ. This process is exactly the same as for the expectation, in that we use the sample mean to estimate the theoretical mean. This approach is very easy to use, even for multi-dimensional distributions.
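For concreteness, here is a minimal R sketch of Monte Carlo integration. The choices X ∼ Exp(1) and ϕ(x) = x², for which θ = 𝔼[X²] = 2 exactly, are purely illustrative and not from the notes.

set.seed(1)
n   <- 10000
x   <- rexp(n)              # sample x_1, ..., x_n from f
phi <- function(x) x^2
theta.hat <- mean(phi(x))   # (1/n) * sum of phi(x_i)
theta.hat                   # close to the true value theta = 2
sd(phi(x)) / sqrt(n)        # Monte Carlo standard error, sqrt(Var[phi(X)]/n)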

Properties of Monte Carlo integration

  1. Unbiased.

  2. If the variance of ϕ(X) exists, then

     \mathrm{Var}[\hat{\theta}] = \frac{1}{n} \mathrm{Var}[\phi(X)].

  3. Central limit theorem:

     \frac{\sqrt{n}(\hat{\theta} - \theta)}{\sqrt{\mathrm{Var}[\phi(X)]}} \stackrel{D}{\longrightarrow} N(0, 1).

     That is,

     \hat{\theta} \approx N(\theta, \mathrm{Var}[\phi(X)]/n).

All of the above hold provided the {Xi} are independent. However, these results can be extended to the case where the {Xi} are dependent, which will be the case when using MCMC.

Drawbacks of Monte Carlo integration

  1. The variance of ϕ(X) is often large. This problem can be circumvented to some extent by variance reduction methods such as importance sampling.

  2. It can be difficult to simulate from X (in an efficient manner). This is a harder problem to deal with, and this is where MCMC will be useful. Simulating from univariate distributions (rejection sampling, inversion of the cdf) is usually fairly straightforward, but for multivariate distributions the problem becomes a lot harder, as methods such as rejection sampling perform much worse as the dimension of the problem grows. This means a lot of work is required to get even a small sample from the distribution of X.

2.3 Markov Chains

There is a lot of deep theory related to the use of Markov chains in Monte Carlo methods and we shall barely scratch the surface.

What is a Markov Chain?
A Markov chain {X_n, n ≥ 0} is a stochastic process which satisfies the 'memoryless' property:

P(X_n \in A \mid X_{n-1} = x_{n-1}, X_{n-2} = x_{n-2}, \ldots, X_0 = x_0) = P(X_n \in A \mid X_{n-1} = x_{n-1}),

where X_n denotes the state of the process after n steps. Basically, the future is independent of the past, given the present state of the process. A good introduction to the theory of Markov chains is given in Grimmett and Stirzaker (1992), Section 6. We restrict attention in our discussion to discrete, time-homogeneous Markov chains, that is, for all A, i and n,

P(X_{n+1} \in A \mid X_n = i) = P(X_1 \in A \mid X_0 = i).

A very simple example of a Markov chain is the game Snakes and Ladders. Your future progress only depends upon your current position and not how you have arrived at your current position.

We shall illustrate the basic ideas, assuming that the state space for X is countable, although in most examples of MCMC the state space (set of possible values) of the parameters will be ℝ^d, which is uncountable. For all i, j, let

P_{ij} = P(X_1 = j \mid X_0 = i).

We shall assume that the Markov chain is time homogeneous, that is, for all n ≥ 1,

P(X_n = j \mid X_{n-1} = i) = P_{ij},

the probability of moving from state i to state j in one step. Suppose that the state space is finite. Let P denote the matrix with elements {P_ij}. Then the (i,j)th element of P^n gives P(X_n = j | X_0 = i).
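In R, n-step transition probabilities can be computed by repeated matrix multiplication. A small sketch; the 2-state matrix below is invented for illustration:

P <- matrix(c(0.9, 0.1,
              0.4, 0.6), nrow = 2, byrow = TRUE)
mat.power <- function(P, n) {
  Q <- diag(nrow(P))                 # start from the identity matrix
  for (k in seq_len(n)) Q <- Q %*% P
  Q
}
mat.power(P, 50)   # (i,j)th element is P(X_50 = j | X_0 = i)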

Snakes and Ladders Example

Suppose that we are currently at square 37 after n turns and that there is a ladder at square 40 leading to square 62 and a snake at square 42 leading to square 27. Then, rolling a standard 6-sided die, we have

P(X_{n+1} = 38 \mid X_n = 37) = 1/6
P(X_{n+1} = 39 \mid X_n = 37) = 1/6
P(X_{n+1} = 62 \mid X_n = 37) = 1/6
P(X_{n+1} = 41 \mid X_n = 37) = 1/6
P(X_{n+1} = 27 \mid X_n = 37) = 1/6
P(X_{n+1} = 43 \mid X_n = 37) = 1/6

depending upon which number we roll from 1 to 6.

A Markov chain is
irreducible, if for all i and j, there exists some n such that P(X_n = j | X_0 = i) > 0
(we can get from any state to any other state);
aperiodic, if for all i, the greatest common divisor of {n : P(X_n = i | X_0 = i) > 0} is 1
(it does not exhibit periodic behaviour);
recurrent, if P(Markov chain returns to i | it started at i) = 1 for all i
(we eventually return to the starting state);
and positive recurrent, if it is irreducible, aperiodic and recurrent, and there exists a unique collection of probabilities {π_j} such that for all i,

\lim_{n \to \infty} P(X_n = j \mid X_0 = i) = \pi_j    (2.1)

and

\sum_i \pi_i P_{ij} = \pi_j.

The distribution 𝝅 is called the stationary distribution of the Markov chain. Note that (2.1) says that, regardless of the value of X_0, for large n, P(X_n = j | X_0 = i) ≈ π_j. That is, for large n, X_n is (approximately) distributed according to the stationary distribution. How large n needs to be for the approximation to be close depends upon the starting value and the rate of convergence. The rate of convergence can be calculated using probability theory, and depends upon how

\sup_{x \in S, A \subseteq S} \left| P(X_n \in A \mid X_0 = x) - \pi(A) \right|

behaves as n → ∞, where S is the state space of the parameters (often ℝ^d or a subset thereof). In practice with MCMC, convergence is often 'detected' by eye without any formal procedures.

A key point is that if, for all i, P(X_0 = i) = π_i, then for all n ≥ 1 and i, P(X_n = i) = π_i. Therefore if X_0 is distributed according to 𝝅, then for all n ≥ 1, X_n is distributed according to 𝝅 (the stationary distribution). Also 𝝅 = 𝝅P in matrix notation.

Example. Suppose that we have four states, labelled 1, 2, 3, 4.

Transition matrix

P = \begin{pmatrix} p_{11} & p_{12} & p_{13} & p_{14} \\ p_{21} & p_{22} & p_{23} & p_{24} \\ p_{31} & p_{32} & p_{33} & p_{34} \\ p_{41} & p_{42} & p_{43} & p_{44} \end{pmatrix}

Then p_{ij} is the probability of moving from state i to state j. That is,

P(X_1 = j \mid X_0 = i) = p_{ij}.

Note that all the rows must sum to 1, i.e. \sum_{j=1}^{4} p_{ij} = 1.

We consider different transition matrices describing the transitions from one time point to the next.

Matrix 1.

P = \begin{pmatrix} 0.5 & 0.5 & 0 & 0 \\ 0.3 & 0.7 & 0 & 0 \\ 0.25 & 0.25 & 0.25 & 0.25 \\ 0 & 0.8 & 0.2 & 0 \end{pmatrix}

This is an example of a reducible Markov chain, since we cannot get from state 2 to state 3. The Markov chain is also not recurrent: if you start in state 3, the probability of ever returning to state 3 is less than 1. (If you go to state 1 or 2 at the first step, you will never return to state 3.) The probability of ever returning to state 3 is 0.25 + 0.25 × 0.2 = 0.3.

Matrix 2.

P = \begin{pmatrix} 0 & 0.5 & 0 & 0.5 \\ 0.3 & 0 & 0.7 & 0 \\ 0 & 0.25 & 0 & 0.75 \\ 0.8 & 0 & 0.2 & 0 \end{pmatrix}

This chain is periodic: it can only return to a given state after an even number of steps. (It alternates between the odd states, 1 and 3, and the even states, 2 and 4.)

Matrix 3.

P = \begin{pmatrix} 0.1 & 0.5 & 0 & 0.4 \\ 0.2 & 0.6 & 0.1 & 0.1 \\ 0 & 1 & 0 & 0 \\ 0.4 & 0 & 0.25 & 0.35 \end{pmatrix}

This Markov chain is positive recurrent with 𝝅 = (0.2, 0.5, 0.1, 0.2) such that

𝝅 = 𝝅P.

For example,

\sum_{j=1}^{4} \pi_j P_{j2} = (0.2 \times 0.5) + (0.5 \times 0.6) + (0.1 \times 1) + (0.2 \times 0) = 0.5 = \pi_2.
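We can verify this numerically in R; a short sketch using Matrix 3 (the power of 50 is an arbitrary illustrative choice):

P <- matrix(c(0.1, 0.5, 0,    0.4,
              0.2, 0.6, 0.1,  0.1,
              0,   1,   0,    0,
              0.4, 0,   0.25, 0.35), nrow = 4, byrow = TRUE)
pi.vec <- c(0.2, 0.5, 0.1, 0.2)
pi.vec %*% P                   # returns (0.2, 0.5, 0.1, 0.2) again
Q <- P
for (k in 1:49) Q <- Q %*% P   # Q = P^50
Q                              # every row is (approximately) pi.vec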

The aim of MCMC is to construct a Markov chain whose stationary distribution is the posterior distribution that we are interested in. A sample path from the Markov chain (provided the Markov chain is started in stationarity) then forms a sample from the posterior distribution of interest. It is important to check that the Markov chain we construct is irreducible and aperiodic; irreducibility in particular.

Detailed balance

A discrete Markov chain with transition matrix P is said to satisfy detailed balance if there is a distribution 𝝂 such that for all i, j,

\nu_i P_{ij} = \nu_j P_{ji}.

Lemma If a Markov chain with transition matrix P satisfies detailed balance with distribution 𝝂, then 𝝂 (= 𝝅) is the stationary distribution of the chain.
Proof: Suppose that 𝝂 satisfies detailed balance. Then for any i,

\sum_j \nu_j P_{ji} = \sum_j \nu_i P_{ij} = \nu_i \sum_j P_{ij} = \nu_i,

since for all i, \sum_j P_{ij} = 1. (All rows sum to 1.) Hence 𝝂 is the stationary distribution of the Markov chain. □

Checking detailed balance is often an easy way of confirming that a probability distribution 𝝅 is the stationary distribution of a Markov chain.
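The check is mechanical: detailed balance holds exactly when the matrix with entries ν_i P_ij is symmetric. A small R utility (my own sketch; the 3-state random-walk example is illustrative):

detailed.balance <- function(nu, P, tol = 1e-12) {
  M <- nu * P                # M[i, j] = nu_i * P_ij (nu scales row i)
  all(abs(M - t(M)) < tol)   # detailed balance iff M is symmetric
}
P <- matrix(c(0.5,  0.5,  0,
              0.25, 0.5,  0.25,
              0,    0.5,  0.5), nrow = 3, byrow = TRUE)
detailed.balance(c(0.25, 0.5, 0.25), P)   # TRUE, so (1/4, 1/2, 1/4) is stationary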

Continuous state space Markov chains

Usually, but not always, in Bayesian statistics we want to obtain samples from a continuous distribution (or a distribution which is a mixture of continuous and discrete variables). Most of the above discussion for discrete state space Markov chains carries over to continuous state space Markov chains in very natural ways with minor modifications. In this short section, we briefly outline the key differences and, in particular, the influence this will have on the MCMC discussion which follows.

Firstly, we replace the transition matrix P by a transition kernel K. Specifically, if we let 𝕊 denote the set of values the Markov chain can take (𝕊 will often be ℝ^d or some subset of it), then for all x, y ∈ 𝕊 we have a density k(x, y) such that

\int k(x, y) \, dy = 1,

and for a set A ⊆ 𝕊, the probability of moving from state x to a state y ∈ A is

\int_A k(x, y) \, dy.

For a continuous distribution the probability of ever returning to a given state x is 0. (For any continuous distribution the probability of observing any single value is 0.) Therefore, for recurrence of a continuous state space Markov chain, we talk in terms of returning to a set A. We won't go into the details, but for any x ∈ 𝕊 we could define a set of values C_x, say, close to x, for example within a certain distance of x, and look at the time taken by the Markov chain to return to C_x.

Secondly, we can still talk in terms of a stationary distribution π, but π will now be a probability density function. The stationary distribution π satisfies

\int \pi(y) k(y, x) \, dy = \pi(x),

where in moving from discrete distributions to continuous distributions we have replaced the summation by integration and the transition matrix by the transition kernel. Similarly, detailed balance is given by

\pi(x) k(x, y) = \pi(y) k(y, x).

2.4 Bayesian Statistics

In classical (or frequentist) statistics, given a parametric model, we assume that the parameters of the model are fixed, typically unknown, constants. We can then use data to make inference about the parameters, often through the likelihood function; that is, find the maximum likelihood estimate (MLE) of the parameters. We can also find standard errors and confidence intervals for the MLE. See, for example, the Week 6 notes on the EM algorithm.

Bayesian statistics takes its name from Thomas Bayes. Again we can assume a parametric model for the data. However, rather than assuming that the parameters are fixed constants, we now assume that they are random variables, incorporating uncertainty about the parameters. Moreover, we can make use of any prior knowledge we (or others) may have about the parameter values. This is an important feature of Bayesian statistics: the presence of a prior distribution for the parameters. These priors can be either vague or informative, reflecting how much prior knowledge we may have concerning the parameters.

The posterior distribution of the parameters is what we are interested in. The posterior distribution depends upon the parametric model chosen, the data and the prior distribution. From the posterior distribution we can obtain the modal value of the parameters (this will be very close to the MLE, although, depending upon the choice of prior distribution, the two will rarely agree exactly), the mean of the parameters and the variance (covariance) of the parameters. Thus we can obtain a great deal of information from the posterior distribution, and there is no need to construct confidence intervals to measure uncertainty in the parameter values, as these are obtained directly from the posterior distribution. Note that as the amount of data increases, the effect of the prior will diminish, and in the limit as we obtain infinite data the choice of prior will be inconsequential.

We now consider Bayesian statistics in action and give some motivation for why MCMC methods have been so successful. Consider a parametric model with parameter(s) θ. Let π(θ) denote the pdf (probability density function) of the prior distribution of θ. Let 𝐱 = (x_1, x_2, …, x_n) denote the observed data and let π(𝐱, θ) denote the joint pdf of the data and the parameters. Then by Bayes' Theorem,

\pi(\mathbf{x}, \theta) = \pi(\mathbf{x} \mid \theta) \pi(\theta) = \pi(\theta \mid \mathbf{x}) \pi(\mathbf{x}).

We are interested in the posterior distribution of θ, that is, the distribution of θ given the data 𝐱, i.e. π(θ|𝐱). Therefore

\pi(\theta \mid \mathbf{x}) = \pi(\mathbf{x} \mid \theta) \pi(\theta) / \pi(\mathbf{x}).

Since the likelihood L(θ; 𝐱) = π(𝐱|θ), we have that

\pi(\theta \mid \mathbf{x}) \propto \mathrm{likelihood} \times \mathrm{prior}.    (2.2)

(Note that π(𝐱) does not depend upon θ.) The likelihood and prior are often relatively easy to obtain, and in such circumstances we can find the posterior distribution up to a constant of proportionality. That is,

\pi(\theta \mid \mathbf{x}) = K \times \mathrm{likelihood} \times \mathrm{prior},

where K needs to be computed. The computation of K (= 1/π(𝐱)) is often very difficult, if analytically possible at all. This has been a major stumbling block for Bayesian statistics. Fortunately, MCMC techniques can be applied directly to (2.2) to obtain samples from the posterior distribution, thus circumventing the need to compute K. In MATH553 you will see alternatives to MCMC for getting around the problem of computing K.
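In one dimension the problem is not severe: K can be approximated numerically on a grid. A sketch in R, using an illustrative binomial likelihood with a Beta(2, 2) prior (chosen to echo the practice questions at the end of this section; none of this is needed once we have MCMC):

x <- 3; n <- 10                             # illustrative data: 3 successes in 10 trials
p.grid <- seq(0.001, 0.999, length = 999)
unnorm <- dbinom(x, n, p.grid) * dbeta(p.grid, 2, 2)   # likelihood x prior
h      <- p.grid[2] - p.grid[1]             # grid spacing
K      <- 1 / sum(unnorm * h)               # K = 1 / (numerical integral of likelihood x prior)
posterior <- K * unnorm                     # normalised posterior density on the grid
sum(posterior * h)                          # check: integrates to (approximately) 1

In higher dimensions the grid grows exponentially, which is exactly why this direct approach fails and MCMC is needed.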

Gaussian Example.

This simple example produces a nice analytical answer.

Suppose that we have data that are independent and identically distributed according to a random variable X ∼ N(μ, 1), where μ is an unknown parameter. We want to estimate μ.

Prior distribution: Suppose that we know nothing about μ and take π(μ) ∝ 1, so that ∫_{-∞}^{∞} π(μ) dμ = ∞. This is an improper prior, since it does not integrate to 1 over the possible values of μ. However, often (but not always) improper prior distributions give rise to proper posterior distributions, so this is not a problem.

Suppose that x_1, x_2, …, x_n are independent realisations of X. Then

f(x \mid \mu) = \frac{1}{\sqrt{2\pi}} \exp\left(-(x - \mu)^2/2\right),

and

L(\mu; \mathbf{x}) = \prod_{i=1}^{n} f(x_i \mid \mu) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi}} \exp\left(-(x_i - \mu)^2/2\right) = \frac{1}{(2\pi)^{n/2}} \exp\left(-\frac{1}{2} \sum_{i=1}^{n} (x_i - \mu)^2\right).

Therefore

\pi(\mu \mid \mathbf{x}) \propto \frac{1}{(2\pi)^{n/2}} \exp\left(-\frac{1}{2} \sum_{i=1}^{n} (x_i - \mu)^2\right) \times 1
\propto \exp\left(-\frac{1}{2}\left(\sum_{i=1}^{n} x_i^2 - 2\mu \sum_{i=1}^{n} x_i + n\mu^2\right)\right)
\propto \exp\left(-\frac{1}{2}\left(n\mu^2 - 2\mu n\bar{x}\right)\right),

where \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i.

A very useful result. Let Y have probability density function g(y). If

g(y) \propto \exp\left(-\frac{1}{2}(A y^2 - 2 B y)\right),

then Y ∼ N(B/A, 1/A).

Thus

\mu \mid \mathbf{x} \sim N\left(\bar{x}, \frac{1}{n}\right).

Therefore the posterior mean of μ is the sample mean (which is the MLE). Also, the posterior variance is 1/n (= Var[X]/n), which agrees with the variance of the MLE. In other words, as the sample size grows, we obtain tighter and tighter estimates of μ.
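A quick numerical check of this result in R (the data are simulated, with true μ = 3 chosen arbitrarily):

set.seed(2)
n <- 50
x <- rnorm(n, mean = 3, sd = 1)    # data from N(mu, 1) with mu = 3
draws <- rnorm(10000, mean = mean(x), sd = sqrt(1/n))   # draws from N(xbar, 1/n)
c(mean(draws), var(draws))         # approximately xbar and 1/n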

Conjugate priors

The choice of prior plays an important part in Bayesian statistics. There are two key features: how informative the priors are, and conjugacy. We illustrate this idea with an extension of the Gaussian example above to the case where both the mean μ and variance σ² are unknown. The choice of priors, and in particular conjugacy, will be covered in detail in MATH553.

For 1 ≤ i ≤ n, let X_i | μ, σ² ∼ N(μ, σ²). Suppose also that we have the following prior distributions for μ and σ²:

\mu \sim N(\lambda_0, 1/\omega_0)
\sigma^{-2} \sim \mathrm{Gamma}(\alpha_0, \gamma_0),

where μ and σ are assumed to be a priori independent and λ_0, ω_0, α_0 and γ_0 are assumed to be known hyperparameters. Hyperparameters are simply the parameters of the prior distributions of the model parameters. Letting τ = σ^{-2} (τ is known as the precision and is the inverse of the variance), we have that f(x_i | τ, μ) = \sqrt{\tau/(2\pi)} \exp(-\tau (x_i - \mu)^2/2), so

\pi(\mu, \tau \mid \mathbf{x}) \propto \prod_{i=1}^{n} \left\{ \tau^{1/2} \exp(-\tau (x_i - \mu)^2/2) \right\} \times \exp(-\omega_0 (\mu - \lambda_0)^2/2) \, \tau^{\alpha_0 - 1} e^{-\gamma_0 \tau}.

Therefore, although the form of the prior distribution is highly tractable, the posterior distribution is a complex two dimensional distribution. This is often the case for multidimensional parameter sets.

However, if we consider the conditional distributions of each of the parameters in turn, these turn out to be rather nice. We exploit the following two observations, which are useful for deriving conditional distributions:

  1. Suppose that the random variable Y has a pdf of the form

     f(y) \propto \prod_{j=1}^{m} \exp\left(-\frac{\phi_j}{2}(y - \delta_j)^2\right);

     then

     Y \sim N\left(\frac{\sum_{j=1}^{m} \phi_j \delta_j}{\sum_{j=1}^{m} \phi_j}, \frac{1}{\sum_{j=1}^{m} \phi_j}\right).

     That is, if the pdf of Y is composed as the product of Normal densities, then Y has a Normal density with more weight given to those components with larger precisions (smaller variances).

  2. Suppose that the random variable W has a pdf of the form

     g(w) \propto w^{A-1} \exp(-B w);

     then

     W \sim \mathrm{Gamma}(A, B).

Firstly, to consider the conditional distribution of μ | τ, 𝐱, we only need to focus upon terms involving μ. Therefore

\pi(\mu \mid \tau, \mathbf{x}) \propto \prod_{i=1}^{n} \left\{ \exp\left(-\frac{\tau}{2}(x_i - \mu)^2\right) \right\} \exp\left(-\frac{\omega_0}{2}(\mu - \lambda_0)^2\right),

so that

\mu \mid \tau, \mathbf{x} \sim N\left(\frac{\tau \sum_{i=1}^{n} x_i + \lambda_0 \omega_0}{n\tau + \omega_0}, \frac{1}{n\tau + \omega_0}\right).

Similarly, for τ | μ, 𝐱 we only need to focus upon terms involving τ, giving

\pi(\tau \mid \mu, \mathbf{x}) \propto \prod_{i=1}^{n} \left\{ \tau^{1/2} \exp(-\tau (x_i - \mu)^2/2) \right\} \times \tau^{\alpha_0 - 1} e^{-\gamma_0 \tau} = \tau^{\alpha_0 + n/2 - 1} \exp\left(-\tau \left(\frac{1}{2} \sum_{i=1}^{n} (x_i - \mu)^2 + \gamma_0\right)\right),

so that

\tau \mid \mu, \mathbf{x} \sim \mathrm{Gamma}\left(\alpha_0 + \frac{n}{2}, \gamma_0 + \frac{1}{2} \sum_{i=1}^{n} (x_i - \mu)^2\right).

Thus the conditional distribution of each of the parameters has a nice simple form. This is called conditional conjugacy.

We therefore proceed by introducing the Gibbs sampler, which is applicable when the conditional distribution of each parameter given the other parameters is known, but the full joint distribution of the parameters is not known explicitly.

2.5 Gibbs Sampler algorithm

Suppose that we wish to obtain a sample from the multivariate posterior distribution π(𝜽|𝐱), where 𝜽 = (θ_1, θ_2, …, θ_d) denotes the parameters of the model and 𝐱 = (x_1, x_2, …, x_N) denotes the observed data. The Gibbs sampler does this by successively and repeatedly simulating from the conditional distributions of each component given the other components. This procedure is particularly useful where we have conditional conjugacy, so that the resulting conditional distributions are standard distributions.

Gibbs Sampler algorithm

  1. Initialise with 𝜽^(0) = (θ_1^(0), …, θ_d^(0)).

  2. For i = 1, 2, …, n:

     (a) Simulate θ_1^(i) from the conditional θ_1 | (θ_2^(i-1), …, θ_d^(i-1)), 𝐱.

     (b) Simulate θ_2^(i) from the conditional θ_2 | (θ_1^(i), θ_3^(i-1), …, θ_d^(i-1)), 𝐱.

     (c) …

     (d) Simulate θ_d^(i) from the conditional θ_d | (θ_1^(i), …, θ_{d-1}^(i)), 𝐱.

  3. Discard the first k iterations and estimate summary statistics of the posterior distribution using 𝜽^(k+1), 𝜽^(k+2), …, 𝜽^(n).

How does it work?

Suppose that 𝜽 comes from π(𝜽|𝐱). Then for i = 1, 2, …, d, updating the ith component involves drawing a new value θ_i′ of θ_i from θ_i | 𝜽_{-i}, 𝐱. Thus 𝜽′, the updated set of parameters with θ_i replaced by θ_i′, also comes from the posterior distribution of 𝜽.

Thus, provided 𝜽^(0) is drawn from the posterior distribution of 𝜽, the iterates 𝜽^(1), 𝜽^(2), …, 𝜽^(n) are samples from the posterior distribution. Note that we have a dependent sample.

2.6 Burn in

We have to specify initial values for 𝜽. If 𝜽^(0) were drawn from the stationary distribution of the Markov chain (the distribution we are interested in), then (θ_1^(0), …, θ_d^(0)), …, (θ_1^(n), …, θ_d^(n)) would be realisations from the stationary distribution π(𝜽|𝐱). However, we don't know what the (stationary) distribution is. (If we knew the distribution of 𝜽 = (θ_1, θ_2, …, θ_d), there would be no need for MCMC to simulate from it!) Thus typically 𝜽^(0) will not be drawn from the stationary distribution. As k increases, 𝜽^(k) increasingly 'forgets' the initial value 𝜽^(0), and consequently, for large k, 𝜽^(k) is approximately from the stationary distribution. Hence (θ_1^(k+1), …, θ_d^(k+1)), …, (θ_1^(n), …, θ_d^(n)) are approximately from the stationary distribution and are thus considered a sample thereof. In some circumstances it is possible to use perfect simulation, in which case 𝜽^(0) is drawn from the stationary distribution even though the stationary distribution is unknown. Perfect simulation is not covered in this course and is only viable in some special cases.

2.7 Gaussian Example

We recap the normal example given in Section 2.4. Suppose that x_1, x_2, …, x_n are independent and identically distributed according to X ∼ N(μ, 1/τ), where μ and τ are unknown parameters to be estimated. The following prior distributions are assigned to μ and τ:-

\mu \sim N(\lambda_0, 1/\omega_0)
\tau \sim \mathrm{Gamma}(\alpha_0, \gamma_0).

We have the following Gibbs sampler algorithm for obtaining a sample of size N from the joint distribution of θ = (μ, τ).

  1. Initialise with θ^(0) = (μ^(0), τ^(0)).
     Any values of μ^(0) and τ^(0) > 0 will be OK. However, a reasonable start would be the sample mean and the inverse of the sample variance. That is, μ^(0) = (1/n) Σ_{i=1}^{n} x_i and τ^(0) = 1 / ((1/(n-1)) Σ_{i=1}^{n} (x_i - μ^(0))²).

  2. For k = 1, 2, …, N:-

     (a) Simulate

         \mu^{(k)} \mid \tau^{(k-1)} = \tau, \mathbf{x} \sim N\left(\frac{\tau \sum_{i=1}^{n} x_i + \lambda_0 \omega_0}{n\tau + \omega_0}, \frac{1}{n\tau + \omega_0}\right).

     (b) Simulate

         \tau^{(k)} \mid \mu^{(k)} = \mu, \mathbf{x} \sim \mathrm{Gamma}\left(\alpha_0 + \frac{n}{2}, \gamma_0 + \frac{1}{2} \sum_{i=1}^{n} (x_i - \mu)^2\right).

The Gibbs sampler was run to produce a sample of size 1100 from the joint distribution of μ and τ. We discard the first 100 samples as burn-in, using the remaining 1000 samples to estimate μ and σ (= 1/√τ).
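A sketch of this Gibbs sampler in R. The data x below are simulated purely for illustration, and the hyperparameter values are my own choices; the conditional distributions are exactly those in steps (a) and (b) above.

gibbs.norm <- function(x, N = 1100, lambda0 = 0, omega0 = 0.01,
                       alpha0 = 1, gamma0 = 1) {
  n   <- length(x)
  mu  <- mean(x)        # initial values: sample mean and inverse sample variance
  tau <- 1 / var(x)
  out <- matrix(NA, N, 2, dimnames = list(NULL, c("mu", "tau")))
  for (k in 1:N) {
    mu  <- rnorm(1, (tau * sum(x) + lambda0 * omega0) / (n * tau + omega0),
                 sqrt(1 / (n * tau + omega0)))
    tau <- rgamma(1, shape = alpha0 + n / 2,
                  rate  = gamma0 + 0.5 * sum((x - mu)^2))
    out[k, ] <- c(mu, tau)
  }
  out
}
set.seed(3)
x   <- rnorm(50, mean = 2.6, sd = 2.2)   # simulated data for illustration
sim <- gibbs.norm(x)[-(1:100), ]         # discard the first 100 iterations as burn-in
colMeans(cbind(mu = sim[, "mu"], sigma = 1 / sqrt(sim[, "tau"])))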

        mean    s.d.
  μ     2.625   0.287
  σ     2.196   0.229

[Figures: Gibbs sampler output for μ and σ]

2.8 Gibbs sampler properties

Verification

We will show that π(𝜽) is indeed the stationary distribution of the Gibbs sampler in the case where d = 2. The proof for d > 2 is similar but more long-winded.

Let π_1(θ_1 | θ_2) = π(θ_1, θ_2)/π(θ_2) and π_2(θ_2 | θ_1) = π(θ_1, θ_2)/π(θ_1) denote the conditional distributions of θ_1 and θ_2, respectively. Suppose that 𝜽^0 = (θ_1^0, θ_2^0) is drawn from π(⋅). Then the pdf of 𝜽^1 given 𝜽^0 is

f(\boldsymbol{\theta}^1 \mid \boldsymbol{\theta}^0) = \pi_1(\theta_1^1 \mid \theta_2^0) \, \pi_2(\theta_2^1 \mid \theta_1^1).

We want to show that the pdf of 𝜽^1, f(⋅), is equal to π(⋅).

For any set A,

P(\boldsymbol{\theta}^1 \in A)
= \int 1_{\{\boldsymbol{\theta}^1 \in A\}} f(\boldsymbol{\theta}^1) \, d\boldsymbol{\theta}^1
= \int\!\!\int 1_{\{\boldsymbol{\theta}^1 \in A\}} f(\boldsymbol{\theta}^1 \mid \boldsymbol{\theta}^0) \pi(\boldsymbol{\theta}^0) \, d\boldsymbol{\theta}^1 \, d\boldsymbol{\theta}^0
= \int\!\!\int\!\!\int\!\!\int 1_{\{(\theta_1^1, \theta_2^1) \in A\}} \pi_1(\theta_1^1 \mid \theta_2^0) \pi_2(\theta_2^1 \mid \theta_1^1) \pi(\theta_1^0, \theta_2^0) \, d\theta_1^1 \, d\theta_2^1 \, d\theta_1^0 \, d\theta_2^0
= \int\!\!\int\!\!\int 1_{\{(\theta_1^1, \theta_2^1) \in A\}} \pi_1(\theta_1^1 \mid \theta_2^0) \pi_2(\theta_2^1 \mid \theta_1^1) \left[ \int \pi(\theta_1^0, \theta_2^0) \, d\theta_1^0 \right] d\theta_1^1 \, d\theta_2^1 \, d\theta_2^0
= \int\!\!\int\!\!\int 1_{\{(\theta_1^1, \theta_2^1) \in A\}} \pi_1(\theta_1^1 \mid \theta_2^0) \pi_2(\theta_2^1 \mid \theta_1^1) \pi(\theta_2^0) \, d\theta_1^1 \, d\theta_2^1 \, d\theta_2^0
= \int\!\!\int 1_{\{(\theta_1^1, \theta_2^1) \in A\}} \pi_2(\theta_2^1 \mid \theta_1^1) \left[ \int \pi_1(\theta_1^1 \mid \theta_2^0) \pi(\theta_2^0) \, d\theta_2^0 \right] d\theta_1^1 \, d\theta_2^1
= \int\!\!\int 1_{\{(\theta_1^1, \theta_2^1) \in A\}} \pi_2(\theta_2^1 \mid \theta_1^1) \pi_1(\theta_1^1) \, d\theta_1^1 \, d\theta_2^1
= \int\!\!\int 1_{\{(\theta_1^1, \theta_2^1) \in A\}} \pi(\theta_1^1, \theta_2^1) \, d\theta_1^1 \, d\theta_2^1
= \int 1_{\{\boldsymbol{\theta}^1 \in A\}} \pi(\boldsymbol{\theta}^1) \, d\boldsymbol{\theta}^1,

as required.

Observations

We make a couple of minor observations about the Gibbs sampler and its output which are very useful in practice.

Firstly, suppose that we are interested in the marginal distribution of one or more of the parameters, for example π(θ_j | 𝐱). Then θ_j^(1), θ_j^(2), …, θ_j^(n) represents a sample from π(θ_j | 𝐱). That is, we automatically get samples from the marginal distributions of the parameters from the Gibbs sampler.

Secondly, whilst it will often be the case that univariate conditional distributions are used to construct the Gibbs sampler, it is possible to update more than one parameter together in a block by utilising multi-dimensional conditional distributions. That is, we update a subset of p of the d parameters conditional upon the data and the other d − p parameters. This is particularly useful in regression and similar examples, which give rise to multivariate Gaussian distributions as conditional distributions.

2.9 A Poisson count change point problem

We shall use this example to further illustrate the Gibbs sampler in action. The data to be analysed are taken from Jarrett (1979) and concern the number of coal mining disasters per year over the period 1851-1962.

A plot of the data is given in the figure below and suggests that there has been a reduction in the number of disasters per year over the period. The following model was suggested by Carlin et al. (1992):

Y_i \sim \mathrm{Po}(\theta), \quad i = 1, \ldots, k;
Y_i \sim \mathrm{Po}(\lambda), \quad i = k+1, \ldots, n \ (= 112).

That is, the number of disasters per year is Poisson distributed. Moreover, the first k years have a common mean θ and the remaining n − k years have a common mean λ. This is a standard change point problem, in that there is a change in the behaviour of the model (here a change in mean) at an unknown point in time.

[Figure: number of coal mining disasters per year, 1851-1962]

We complete the specification of the model by assuming the following prior structure. Let θ ∼ Gamma(a_1, b_1), λ ∼ Gamma(a_2, b_2) and k be discrete uniform over {1, 2, …, 112}, each independent of one another (i.e. the parameters are a priori independent), with known hyperparameters a_1, a_2, b_1 and b_2. More complicated hierarchical models have previously been used to analyse these data; see Carlin et al. (1992).

Therefore

L(\theta, \lambda, k; \mathbf{y}) = \prod_{i=1}^{k} \frac{\theta^{y_i}}{y_i!} e^{-\theta} \times \prod_{i=k+1}^{n} \frac{\lambda^{y_i}}{y_i!} e^{-\lambda}    (2.3)
\propto \theta^{\sum_{i=1}^{k} y_i} e^{-k\theta} \lambda^{\sum_{i=k+1}^{n} y_i} e^{-(n-k)\lambda}
= \left(\frac{\theta}{\lambda}\right)^K e^{k(\lambda - \theta)} \lambda^{\sum_{i=1}^{n} y_i} e^{-n\lambda},    (2.4)

where K = \sum_{i=1}^{k} y_i, and

\pi(\theta, \lambda, k \mid \mathbf{y}) \propto L(\theta, \lambda, k; \mathbf{y}) \times \pi(\theta) \times \pi(\lambda) \times \pi(k)    (2.5)
\propto L(\theta, \lambda, k; \mathbf{y}) \times \theta^{a_1 - 1} e^{-b_1 \theta} \times \lambda^{a_2 - 1} e^{-b_2 \lambda} \times \frac{1}{112}.    (2.6)

The conditional distributions of θ, λ and k can be obtained from (2.3)-(2.6) by focusing upon only those terms involving the parameter of interest. Note that k takes discrete values in the range {1, 2, …, 112}.

The conditional distributions are given as follows:

  1. Using (2.3) and (2.5), we have that

     \pi(\theta \mid \lambda, k, \mathbf{y}) \propto \theta^{\sum_{i=1}^{k} y_i} e^{-k\theta} \times \theta^{a_1 - 1} e^{-b_1 \theta} = \theta^{\sum_{i=1}^{k} y_i + a_1 - 1} e^{-(k + b_1)\theta}.    (2.7)

     Noting that this is the kernel of a Gamma distribution (see the observation in Section 2.4),

     \theta \mid \mathbf{y}, \lambda, k \sim \mathrm{Gamma}\left(a_1 + \sum_{i=1}^{k} y_i, \, k + b_1\right).

  2. Using (2.3) and (2.5), we have that

     \pi(\lambda \mid \theta, k, \mathbf{y}) \propto \lambda^{\sum_{i=k+1}^{n} y_i} e^{-(n-k)\lambda} \times \lambda^{a_2 - 1} e^{-b_2 \lambda} = \lambda^{\sum_{i=k+1}^{n} y_i + a_2 - 1} e^{-((n-k) + b_2)\lambda}.    (2.8)

     Noting that this is the kernel of a Gamma distribution (see the observation in Section 2.4),

     \lambda \mid \mathbf{y}, \theta, k \sim \mathrm{Gamma}\left(a_2 + \sum_{i=k+1}^{n} y_i, \, (n - k) + b_2\right).

  3. Using (2.4) and (2.5), we have that

     p(k \mid \mathbf{y}, \theta, \lambda) \propto \exp\{k(\lambda - \theta)\} (\theta/\lambda)^K = Q(\mathbf{y}; k, \theta, \lambda), \text{ say}.    (2.9)

     Now k does not have a nice conditional distribution, by which we mean an easily recognisable, well known distribution. However, k has a discrete distribution and there are only 112 possible values. Therefore we can compute the normalising constant directly to give

     p(k \mid \mathbf{y}, \theta, \lambda) = \frac{Q(\mathbf{y}; k, \theta, \lambda)}{\sum_{l=1}^{112} Q(\mathbf{y}; l, \theta, \lambda)}.    (2.10)

     Sampling from this conditional distribution is straightforward in R using the sample command; a sketch is given below.
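A sketch of this update in R (the function name sample.k is mine; y, theta and lambda are assumed to be available from the rest of the sampler):

sample.k <- function(y, theta, lambda) {
  n <- length(y)                       # n = 112 for the coal mining data
  K <- cumsum(y)                       # K = sum_{i=1}^{k} y_i for k = 1, ..., n
  logQ <- (1:n) * (lambda - theta) + K * (log(theta) - log(lambda))
  q <- exp(logQ - max(logQ))           # subtract the maximum for numerical stability
  sample(1:n, size = 1, prob = q)      # sample() normalises prob internally
}

Working on the log scale avoids overflow of exp{k(λ − θ)} for large k.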

The Gibbs sampler was applied to the coal mining data set and run for 1100 iterations to obtain samples from (θ, λ, k). We set a_1 = a_2 = b_1 = b_2 = 1 for the priors. The results of the Gibbs sampler are shown in the figures below.

[Figures: Gibbs sampler output for θ, λ and k]

Convergence of the algorithm appears to be rapid (after compensating for a poor choice of starting value, namely k = 100). Therefore I deleted the first 100 values and based all the analysis on the remaining 1000 iterations. In other words, I took the first 100 iterations as burn-in.

A histogram of the posterior distribution of the change point year (k + 1850) is given below. The posterior mode is at k = 41, and there is very strong evidence to suggest that the change point occurs about the posterior mode, which corresponds to the year 1891.

[Figure: histogram of the posterior distribution of the change point year]

It is clear from the trace plots above and the kernel density estimates of the posterior distributions that λ is (almost certainly) less than θ. This corresponds to fewer disasters, and hence a safer working environment, after the change point. Note that any summary statistic or distributional quantity related to the joint distribution of λ and θ, such as the distribution of θ − λ, can be estimated using the output from the Gibbs sampler. This is simply done by looking at each pair of realisations (θ, λ)_i, giving (θ − λ)_i as the ith realisation from the posterior distribution of the difference.

[Figures: kernel density estimates of the posterior distributions]

We can easily sample from the predictive distribution. Let z denote a future observation. Future observations are iid according to Po(λ) and, given λ, do not depend upon 𝐲, k or θ. Therefore the predictive distribution of z given 𝐲, π(z|𝐲), satisfies

\pi(z \mid \mathbf{y}) = \int \pi(z \mid \lambda) \pi(\lambda \mid \mathbf{y}) \, d\lambda.

Samples from π(λ|𝐲) are given by the Gibbs sampler in the form λ^(1), λ^(2), …, λ^(N). Therefore we can estimate π(z|𝐲) using the Monte Carlo estimate

\widehat{\pi(z \mid \mathbf{y})} = \frac{1}{N} \sum_{i=1}^{N} \pi(z \mid \lambda^{(i)}) = \frac{1}{N} \sum_{i=1}^{N} \frac{(\lambda^{(i)})^z}{z!} \exp(-\lambda^{(i)}),    (2.11)

or simply simulate z_i | λ^(i) ∼ Po(λ^(i)) and estimate π(z|𝐲) by

\widehat{p(z \mid \mathbf{y})} = \frac{1}{N} \sum_{i=1}^{N} 1_{\{z_i = z\}}.    (2.12)

The estimate in (2.11) has smaller variance, i.e. is better. However, (2.12) is easier to compute and only requires the ability to simulate from the distribution.
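Both estimates are short functions in R, given the vector lambda of posterior draws from the Gibbs sampler (a sketch; the function names are mine):

pred.smooth <- function(z, lambda) mean(dpois(z, lambda))   # estimate (2.11)
pred.sim <- function(z, lambda) {
  zsim <- rpois(length(lambda), lambda)   # z_i | lambda^(i) ~ Po(lambda^(i))
  mean(zsim == z)                         # estimate (2.12)
}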

For the coal mining data set, we can use the estimator to predict the number of disasters in future years (Poisson with rate λ). A graph of the predictive distribution, alongside the corresponding estimates based on the posterior mean (π(z|λ̄)), is given for a sample of 1000 realisations in the figure below. Since there is limited variation in the posterior distribution of λ, the two distributions are virtually identical. However, in general the predictive distribution will demonstrate greater variability, owing to the uncertainty in λ.

[Figure: predictive and estimative (based on π(z|λ̄)) distributions of the number of coal mining disasters per year; solid line is the predictive distribution]

2.10 R practice questions

R code for applying the Gibbs sampler to generate samples from a bivariate normal distribution and the outline of a Gibbs sampler algorithm for the coal mining data.

  1. Bivariate Normal
     The program biv (in biv normR.R) generates values, using the Gibbs sampler, from the bivariate normal distribution:-

     (X_1, X_2) \sim N\left(\begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}\right).

     Can you modify the code to produce samples from the bivariate normal distribution:-

     (Y_1, Y_2) \sim N\left(\begin{pmatrix} \mu_1 \\ \mu_2 \end{pmatrix}, \begin{pmatrix} \sigma_1^2 & \rho \sigma_1 \sigma_2 \\ \rho \sigma_1 \sigma_2 & \sigma_2^2 \end{pmatrix}\right),

     given that

     Y_1 \mid Y_2 = y_2 \sim N(\mu_1 + \sigma_1 \rho (y_2 - \mu_2)/\sigma_2, \, (1 - \rho^2) \sigma_1^2)
     Y_2 \mid Y_1 = y_1 \sim N(\mu_2 + \sigma_2 \rho (y_1 - \mu_1)/\sigma_1, \, (1 - \rho^2) \sigma_2^2)?

     (A sketch of a Gibbs sampler for the standard case is given after these questions.)
  2. Coal Mining Example
     Complete the Gibbs sampling algorithm for the coal mining data (in coal outlineR.R).

     Remember:-

     \theta \mid \mathbf{y}, \lambda, k \sim \mathrm{Gamma}\left(a_1 + \sum_{i=1}^{k} y_i, \, k + b_1\right)
     \lambda \mid \mathbf{y}, \theta, k \sim \mathrm{Gamma}\left(a_2 + \sum_{i=k+1}^{n} y_i, \, n - k + b_2\right)
     p(k \mid \mathbf{y}, \theta, \lambda) = Q(\mathbf{y}; k, \theta, \lambda) \Big/ \sum_{j=1}^{n} Q(\mathbf{y}; j, \theta, \lambda),

     where

     Q(\mathbf{y}; k, \theta, \lambda) = \exp\{k(\lambda - \theta)\} (\theta/\lambda)^K.

     Hint: To sample k, use the sample command in R.

     Apply the algorithm to the coal mining data.
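For reference, here is a minimal sketch of a Gibbs sampler for the standard bivariate normal in question 1, alternating between the conditionals X_1 | X_2 ∼ N(ρx_2, 1 − ρ²) and X_2 | X_1 ∼ N(ρx_1, 1 − ρ²). The function name and default values are mine, not those used in the provided file.

biv.gibbs <- function(N = 1000, rho = 0.8) {
  out <- matrix(NA, N, 2)
  x2  <- 0                                     # arbitrary initial value
  for (i in 1:N) {
    x1 <- rnorm(1, rho * x2, sqrt(1 - rho^2))  # X1 | X2 = x2
    x2 <- rnorm(1, rho * x1, sqrt(1 - rho^2))  # X2 | X1 = x1
    out[i, ] <- c(x1, x2)
  }
  out
}
sim <- biv.gibbs()
cor(sim)   # off-diagonal entries should be close to rho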

2.11 Lab Session: Multiple regression using the Gibbs Sampler

The aim is to analyse the dataset in the file labhills.txt. The file contains record times (in minutes) for 35 Scottish hill races, together with the length of each race (in miles) and its elevation (in feet). The objective is to model the winning times in terms of the distances and climbs of the races.

Examine the data file

> hills <- read.table("labhills.txt")
> names(hills)

Now look at the R data frame called hills: what are the names of the variables in the data frame? A data frame is just a matrix containing data, and each element in the matrix can be accessed as one usually does with matrices. To refer to the first variable in the data frame (that is, the first column), you can either say hills[,1] or call it by its name, hills$dist. Similarly for the other variables in the data frame. If you want to see the entire data frame, do

> hills

Next look at the file gibbs_normalR.r. This contains a Gibbs sampler for the normal model that was described in the lectures:-

Y_i \sim N(\mu, 1/\tau), \quad \mu \sim N(\mu_0, 1/\omega_0), \quad \tau \sim \mathrm{Gamma}(\alpha_0, \gamma_0).

Check that you understand what the code is doing. (There is code at the bottom of the file to simulate data and run the algorithm.) Apply the Gibbs sampler to the times of the hill races with μ_0 = 0, ω_0 = 0.01 and α_0 = γ_0 = 1. Why is τ so small? Hint: look at var(y).

The first aim in this lab is for you to extend and alter the code to produce a Gibbs sampler for a linear regression model.

A good starting point is Naismith's rule, which is used to calculate the length of time a hillwalk should take. You divide the total distance by your average speed on the flat and then add on an allowance for each 100 ft of ascent. The form of such a model is:

\mathbb{E}[y_i \mid \boldsymbol{\beta}] = \mathbf{x}_i^T \boldsymbol{\beta} = x_{i1} \beta_1 + x_{i2} \beta_2,    (2.13)

where y_i, x_{i1} and x_{i2} respectively denote the time, distance and climb for the ith race, and 𝜷 = (β_1, β_2) is a parameter vector. Allowing for a Normal error distribution, we obtain that the observations y_1, …, y_n are independent, distributed according to

y_i \mid \boldsymbol{\beta}, \tau \sim N(\mathbf{x}_i^T \boldsymbol{\beta}, 1/\tau).    (2.14)

To construct a Bayesian model, you also need to specify a prior distribution for the parameters 𝜷 and τ. Take these parameters independent a priori, with

\boldsymbol{\beta} = \begin{pmatrix} \beta_1 \\ \beta_2 \end{pmatrix} \sim N_2\left(\boldsymbol{\mu}_0, \, \mathbf{C}_0 = \mathrm{Diag}\left(\frac{1}{\omega_{01}}, \frac{1}{\omega_{02}}\right)\right) \quad \text{and} \quad \tau \sim \mathrm{Gamma}(\alpha_0, \gamma_0).    (2.15)

In other words, the prior for 𝜷 is bivariate Normal with mean vector 𝝁_0 ∈ ℝ² and covariance matrix 𝐂_0. Since the covariance matrix is diagonal, β_1 and β_2 are independent a priori. τ is assigned a Gamma prior distribution.

  1. Write down, up to a proportionality constant, the joint posterior density of (𝜷, τ).

  2. Write down, up to a proportionality constant, the posterior density of τ, conditioning on 𝜷; that is, π(τ | y_1, …, y_n, 𝜷).

     Thus obtain the conditional distribution τ | y_1, …, y_n, 𝜷.

  3. Write down, up to a proportionality constant, the posterior density of β_1, conditioning on τ and β_2; that is, π(β_1 | 𝐲, τ, β_2).

     Thus obtain the conditional distribution β_1 | 𝐲, τ, β_2.

  4. Write down, up to a proportionality constant, the posterior density of β_2, conditioning on τ and β_1; that is, π(β_2 | 𝐲, τ, β_1).

     Thus obtain the conditional distribution β_2 | 𝐲, τ, β_1.

  5. Using the conditional posterior distributions in the previous questions, write down on paper a Gibbs sampling algorithm to compute the posterior distribution of (𝜷, τ) via simulation.

  6. Copy, rename and alter the function in gibbs_normalR.r to create your Gibbs sampler in R.

     Choose hyperparameters for the prior distribution in such a way that the prior does not contain a lot of information (i.e. choose moderately large prior variances for the parameters, but do not choose a shape parameter for the gamma prior below 1). Also, in the absence of prior information to the contrary, it seems natural to choose μ_{01} = μ_{02} and ω_{01} = ω_{02} in the prior distribution for 𝜷.

     Run the program for the hill races dataset.

     (a) Choose a suitable burn-in period and a suitable number of draws to be used for inference.

         Choosing 5000 draws and discarding the first 1000 as burn-in is reasonable.

     (b) Display histograms of the marginal posterior distributions of β_1, β_2 and τ.

     (c) Compute the following numerical summaries of their posterior distributions: minimum, 1st quartile, median, mean, 3rd quartile, maximum.

     (d) Display a scatterplot of the joint posterior distribution of (β_1, β_2).

     (e) What are your conclusions in terms of the effect of distance and climb on the winning time of races?

  7. Some of the observations may be regarded as outliers, i.e. not fitted well by the assumed model. A way to try to detect outliers is to see whether any of the observations y_i is particularly far from the value predicted by the model. For a race of distance x_{i1} and climb x_{i2}, a natural prediction of its winning time would be

     \hat{y}_i = \mathbb{E}[\beta_1 \mid \mathbf{y}] x_{i1} + \mathbb{E}[\beta_2 \mid \mathbf{y}] x_{i2}.

     For each of the races, compute y_i − ŷ_i and plot the resulting numbers. Are there any races that appear out of line?


Block update in Gibbs sampler

The above Gibbs sampler alternates between:

  1. Updating β_1 | β_2, τ.

  2. Updating β_2 | β_1, τ.

  3. Updating τ | β_1, β_2.

This algorithm works well on the above data set. However, a more efficient Gibbs sampler can be obtained by updating 𝜷 as a block. That is, a Gibbs sampler which alternates between:

  1. Updating 𝜷 = (β_1, β_2) | τ.

  2. Updating τ | 𝜷 = (β_1, β_2).

The update of τ is identical in both algorithms. For the block update of 𝜷 | τ, it is fairly straightforward but algebraically tedious to show that

\boldsymbol{\beta} \mid \tau, \mathbf{y} \sim N(\boldsymbol{\mu}_1, \mathbf{C}_1),

where the prior on 𝜷 is N(𝝁_0, 𝐂_0) and

\mathbf{C}_1 = (\mathbf{C}_0^{-1} + \tau \mathbf{X}^T \mathbf{X})^{-1} \quad \text{and} \quad \boldsymbol{\mu}_1 = \mathbf{C}_1 (\mathbf{C}_0^{-1} \boldsymbol{\mu}_0 + \tau \mathbf{X}^T \mathbf{y}).

The parameters of the conditional posterior distribution of 𝜷 are a weighted average of the prior and the data. Note the similarity to the conditional posterior distribution of μ in the univariate Gaussian example. A sketch of this block update in R is given below.
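A sketch of the block update in R, using mvrnorm from the MASS package to draw from the bivariate normal. The function name and arguments are mine; X is assumed to be the n × 2 design matrix with columns distance and climb.

library(MASS)   # for mvrnorm
draw.beta <- function(tau, X, y, mu0, C0inv) {
  C1  <- solve(C0inv + tau * crossprod(X))         # C1 = (C0^{-1} + tau X'X)^{-1}
  mu1 <- C1 %*% (C0inv %*% mu0 + tau * crossprod(X, y))
  mvrnorm(1, mu = drop(mu1), Sigma = C1)           # one draw from N(mu1, C1)
}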

Note: I won't expect you to compute multivariate conditional distributions for coursework or the exam.

2.12 Practice Questions

  1. Using detailed balance, construct a 4 × 4 transition matrix P with P_{1,4} = 0 and P_{1,3} = 1/2 for a Markov chain with stationary distribution 𝝅 = (1/5, 1/15, 2/5, 1/3).

     Remember, for detailed balance we require, for all i, j:-

     \pi_i P_{ij} = \pi_j P_{ji}.
  2. Let x = 3 denote an observation from X ∼ Bin(n, p), where both n and p are unknown parameters.

     Suppose that Beta(2, 2) and discrete uniform on {1, 2, …, 10} priors are assigned to p and n, respectively. That is,

     \pi(p) = 6 p (1 - p) \quad (0 \le p \le 1)
     \pi(n) = \frac{1}{10} \quad (n = 1, 2, \ldots, 10).

     (a) Write down the likelihood, π(x = 3 | n, p).

     (b) Write down, up to a constant of proportionality, the joint posterior distribution of p and n, π(p, n | x = 3).

     (c) Find the conditional distribution of p given n and x = 3, π(p | n, x = 3).

     (d) Find the conditional distribution of n given p and x = 3, π(n | p, x = 3).

     (e) Describe a Gibbs sampler for obtaining samples from π(p, n | x = 3).

  3. Consider a two-state discrete time Markov process (states 1 and 2). For t = 1, 2, …, let X_t denote the state of the Markov process at time t. The process is observed from time 1 to time n, with observed data 𝐱 = (x_1, x_2, …, x_n).

     There is a (potential) change-point, τ, in the data, in that, for t = 1, 2, …, τ,

     P(X_{t+1} = 1 \mid X_t = 1) = P(X_{t+1} = 2 \mid X_t = 2) = p
     P(X_{t+1} = 2 \mid X_t = 1) = P(X_{t+1} = 1 \mid X_t = 2) = 1 - p,

     and for t = τ + 1, τ + 2, …,

     P(X_{t+1} = 1 \mid X_t = 1) = P(X_{t+1} = 2 \mid X_t = 2) = q
     P(X_{t+1} = 2 \mid X_t = 1) = P(X_{t+1} = 1 \mid X_t = 2) = 1 - q,

     where p and q are unknown parameters to be estimated. Throughout, assume U(0, 1) priors on p and q. For τ, unless specified otherwise, denote the prior probability that τ = k by f(k).

     (a) Given that τ = k, write down the likelihood of the parameters (p, q) and compute the conditional posterior distributions π(p | 𝐱, q, τ = k) and π(q | 𝐱, p, τ = k).
         Hint: Note that, regardless of the value of p, P(X_1 = 1) = P(X_1 = 2) = 1/2.

     (b) Suppose that the change-point is at an unknown location τ. Find an expression for P(τ = k | 𝐱, p, q) (k = 1, 2, …, n − 1).

     (c) Outline a Gibbs sampler algorithm to obtain samples from the joint posterior distribution of (p, q, τ).

  4. Tougher question

     For the Gibbs sampler there is inherent dependence between successive realisations of the parameters. This is generally the case for all MCMC algorithms. The dependence between parameters can have an important impact on the convergence/performance of the MCMC chain (the chain of realisations of the MCMC output) to the posterior distribution.

     Bivariate Normal
     Suppose that 𝐗 = (X_1, X_2) is bivariate normal with standard normal marginal distributions for X_1 and X_2, and 𝔼[X_1 X_2] = ρ. (This is the bivariate normal example for which the Gibbs sampler code is provided.)

     Then 𝐗 has pdf

     f(x_1, x_2) = \frac{1}{2\pi \sqrt{1 - \rho^2}} \exp\left(-\frac{1}{2(1 - \rho^2)} \left\{x_1^2 - 2\rho x_1 x_2 + x_2^2\right\}\right).

     That is, X_k ∼ N(0, 1) (k = 1, 2), with conditional distributions

     X_1 \mid X_2 \sim N(\rho X_2, 1 - \rho^2)    (2.16)

     and

     X_2 \mid X_1 \sim N(\rho X_1, 1 - \rho^2).    (2.17)

     Let 𝐗^(i) = (X_1^(i), X_2^(i)) denote the value of 𝐗 obtained from the ith iteration of the Gibbs sampler, which alternates between (2.16) and (2.17).

     (a) Find the distribution of X_1^(2) given that X_1^(1) = x.

     (b) Find the distribution of X_1^(2) given that X_1^(1) ∼ N(0, 1).

     (c) Find the correlation between X_1^(1) and X_1^(2), given that X_1^(1) ∼ N(0, 1).