1 Week 6: Expectation-Maximisation (EM) algorithm

1.1 Mixture model - motivation

We motivate the Expectation-Maximisation (EM) algorithm with a classical 19th century problem!

The first attempt at mixture modelling was by Pearson in 1894, when he was presented by the biologist W.F.R. Weldon with 1000 observations on crabs found in Naples, see [3]. The data are ratios of forehead width to body length; a frequency table is available at
www.maths.uq.edu.au/~gjm/DATA/Crab.dat.

The following graph shows a histogram of the data together with a mixture of two Gaussians fitted.

[Figure: histogram of the crab data with the fitted two-component Gaussian mixture density overlaid.]

The red line gives the mixture density obtained using the EM algorithm and the green lines show the individual Gaussian densities, multiplied by the appropriate weighting. Hence the red line is the sum of the two green lines.
See www.math.mcmaster.ca/peter/mix/demex/excrabs.html for more details.

We shall consider the two-group case, although the ideas readily extend to more than two groups, and from mixtures of Gaussians to mixtures of other distributions.

If a random variable $X$ is drawn from $N(\mu_1,\sigma_1^2)$ with probability $\alpha_1$ and from $N(\mu_2,\sigma_2^2)$ with probability $\alpha_2$, where $\alpha_1+\alpha_2=1$, then the pdf of $X$ is

$$f(x)=\alpha_1\phi(x\mid\mu_1,\sigma_1^2)+\alpha_2\phi(x\mid\mu_2,\sigma_2^2),$$

where $\phi(x\mid\mu,\sigma^2)$ is the pdf of $N(\mu,\sigma^2)$. This pdf is called a Gaussian mixture with two components.

Proof. Let $Y=j$ if $X$ comes from $N(\mu_j,\sigma_j^2)$, $j=1,2$. By definition, the cdf of $X$ is

$$\begin{aligned}
F(x) &= P(X\le x)\\
&= P(X\le x\mid Y=1)P(Y=1)+P(X\le x\mid Y=2)P(Y=2)\\
&= \int_{-\infty}^{x}\phi(u\mid\mu_1,\sigma_1^2)\,du\times\alpha_1+\int_{-\infty}^{x}\phi(u\mid\mu_2,\sigma_2^2)\,du\times\alpha_2.
\end{aligned}$$

Thus the pdf is

$$f(x)=F'(x)=\phi(x\mid\mu_1,\sigma_1^2)\times\alpha_1+\phi(x\mid\mu_2,\sigma_2^2)\times\alpha_2.$$
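
To make the two-component construction concrete, here is a short R sketch (the parameter values are my own illustrative choices, not the crab data) that simulates from such a mixture by first drawing the component label Y and then drawing X from the corresponding Gaussian; the histogram of the simulated values should resemble the mixture density above.

# Simulate from a two-component Gaussian mixture (illustrative values only)
set.seed(1)
n     <- 1000
alpha <- c(0.4, 0.6)        # mixing probabilities alpha_1, alpha_2
mu    <- c(0.60, 0.65)      # component means mu_1, mu_2
sigma <- c(0.015, 0.012)    # component standard deviations sigma_1, sigma_2

y <- sample(1:2, n, replace = TRUE, prob = alpha)   # component labels Y_i
x <- rnorm(n, mean = mu[y], sd = sigma[y])          # observations X_i

# Compare the histogram with the mixture density
# f(x) = alpha_1 phi(x | mu_1, sigma_1^2) + alpha_2 phi(x | mu_2, sigma_2^2)
hist(x, breaks = 30, freq = FALSE)
xs <- seq(min(x), max(x), length.out = 200)
lines(xs, alpha[1] * dnorm(xs, mu[1], sigma[1]) +
          alpha[2] * dnorm(xs, mu[2], sigma[2]), col = "red")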

Given a random sample $x_1,\ldots,x_n$ from this distribution, the log-likelihood function is

$$\begin{aligned}
\ell(\theta;\mathbf{x}) &= \sum_{i=1}^{n}\log\left\{\alpha_1\phi(x_i\mid\mu_1,\sigma_1^2)+\alpha_2\phi(x_i\mid\mu_2,\sigma_2^2)\right\}\\
&= \sum_{i=1}^{n}\log\left\{\frac{\alpha_1}{\sqrt{2\pi}\,\sigma_1}e^{-\frac{1}{2}\left(\frac{x_i-\mu_1}{\sigma_1}\right)^2}+\frac{\alpha_2}{\sqrt{2\pi}\,\sigma_2}e^{-\frac{1}{2}\left(\frac{x_i-\mu_2}{\sigma_2}\right)^2}\right\},
\end{aligned}$$

which cannot be maximised easily. Here $\mathbf{x}=(x_1,\ldots,x_n)$ and $\theta=(\alpha_1,\alpha_2,\mu_1,\mu_2,\sigma_1^2,\sigma_2^2)$.

However, with additional information $y_1,\ldots,y_n$ on which distribution each $x_i$ comes from, the likelihood function becomes

$$\begin{aligned}
L(\theta;\mathbf{x},\mathbf{y}) &= \prod_{i=1}^{n}P(X_i=x_i,Y_i=y_i\mid\theta)\\
&= \prod_{i=1}^{n}P(X_i=x_i\mid Y_i=y_i,\theta)\,P(Y_i=y_i\mid\theta)\\
&= \prod_{i=1}^{n}\left\{\frac{1}{\sqrt{2\pi}\,\sigma_{y_i}}\exp\left(-\frac{1}{2}\left(\frac{x_i-\mu_{y_i}}{\sigma_{y_i}}\right)^2\right)\times\alpha_{y_i}\right\}.
\end{aligned}$$

Thus the log-likelihood function becomes

$$\begin{aligned}
\ell(\theta;\mathbf{x},\mathbf{y}) &= \sum_{i=1}^{n}\log\left\{\frac{1}{\sqrt{2\pi}\,\sigma_{y_i}}e^{-\frac{1}{2}\left(\frac{x_i-\mu_{y_i}}{\sigma_{y_i}}\right)^2}\times\alpha_{y_i}\right\}\\
&= -\frac{n}{2}\log(2\pi)-\frac{1}{2}\sum_{i=1}^{n}\log(\sigma_{y_i}^2)-\frac{1}{2}\sum_{i=1}^{n}\left(\frac{x_i-\mu_{y_i}}{\sigma_{y_i}}\right)^2+\sum_{i=1}^{n}\log\alpha_{y_i}\\
&= -\frac{n}{2}\log(2\pi)-\frac{1}{2}\log(\sigma_1^2)\sum_{i=1}^{n}1_{\{y_i=1\}}-\frac{1}{2}\log(\sigma_2^2)\sum_{i=1}^{n}1_{\{y_i=2\}}\\
&\quad-\frac{1}{2}\sum_{i=1}^{n}\left(\frac{x_i-\mu_1}{\sigma_1}\right)^2 1_{\{y_i=1\}}-\frac{1}{2}\sum_{i=1}^{n}\left(\frac{x_i-\mu_2}{\sigma_2}\right)^2 1_{\{y_i=2\}}\\
&\quad+\log\alpha_1\sum_{i=1}^{n}1_{\{y_i=1\}}+\log\alpha_2\sum_{i=1}^{n}1_{\{y_i=2\}},
\end{aligned}$$

which is simpler and can be maximised as follows, with ‘obvious’ solutions.

Setting the partial derivatives with respect to $\mu_1$ and $\sigma_1^2$ to zero,

$$\frac{\partial}{\partial\mu_1}\ell(\theta;\mathbf{x},\mathbf{y})=-\frac{1}{2}\sum_{i=1}^{n}2\left(\frac{x_i-\mu_1}{\sigma_1}\right)\frac{-1}{\sigma_1}\,1_{\{y_i=1\}}=0,$$
$$\frac{\partial}{\partial\sigma_1^2}\ell(\theta;\mathbf{x},\mathbf{y})=-\frac{1}{2}\frac{1}{\sigma_1^2}\sum_{i=1}^{n}1_{\{y_i=1\}}+\frac{1}{2}\frac{1}{\sigma_1^4}\sum_{i=1}^{n}(x_i-\mu_1)^2\,1_{\{y_i=1\}}=0,$$

gives

$$\hat{\mu}_1=\frac{\sum_{i=1}^{n}x_i\,1_{\{y_i=1\}}}{\sum_{i=1}^{n}1_{\{y_i=1\}}},\qquad \hat{\sigma}_1^2=\frac{\sum_{i=1}^{n}(x_i-\hat{\mu}_1)^2\,1_{\{y_i=1\}}}{\sum_{i=1}^{n}1_{\{y_i=1\}}},$$

which are respectively the sample mean and sample variance of the sample from $N(\mu_1,\sigma_1^2)$. Similarly,

$$\hat{\mu}_2=\frac{\sum_{i=1}^{n}x_i\,1_{\{y_i=2\}}}{\sum_{i=1}^{n}1_{\{y_i=2\}}},\qquad \hat{\sigma}_2^2=\frac{\sum_{i=1}^{n}(x_i-\hat{\mu}_2)^2\,1_{\{y_i=2\}}}{\sum_{i=1}^{n}1_{\{y_i=2\}}}.$$

The rest of the maximisation uses the Lagrange multiplier method: we maximise

$$\log\alpha_1\sum_{i=1}^{n}1_{\{y_i=1\}}+\log\alpha_2\sum_{i=1}^{n}1_{\{y_i=2\}}-\lambda(\alpha_1+\alpha_2-1)$$

subject to the constraint $\alpha_1+\alpha_2=1$. Setting the partial derivatives with respect to $\alpha_1$ and $\alpha_2$ to zero,

$$\frac{1}{\alpha_1}\sum_{i=1}^{n}1_{\{y_i=1\}}-\lambda=0,\qquad \frac{1}{\alpha_2}\sum_{i=1}^{n}1_{\{y_i=2\}}-\lambda=0,$$

gives

$$\hat{\alpha}_1=\frac{1}{n}\sum_{i=1}^{n}1_{\{y_i=1\}},\qquad \hat{\alpha}_2=\frac{1}{n}\sum_{i=1}^{n}1_{\{y_i=2\}},$$

which are the sample proportions from each population.

However, the data 𝐲 are not available in practice. How, then, can we make use of the above maximum likelihood estimators for θ, which require both 𝐱 and 𝐲, when the only data available are 𝐱? This is where the EM algorithm comes into play. We iterate between the M-step (MLE) above, which computes θ̂ based on the full data (𝐱 and 𝐲), and an E-step, which estimates the unobserved 𝐲 given the data 𝐱 and the current parameter estimates θ.

1.2 Informal EM algorithm for the mixture model

Before formally introducing the EM algorithm, we outline the procedure for the mixture model.

To initialise, we need to choose initial estimates for the parameters of the model. The key thing is to choose μ1 ≠ μ2 and for the parameter values to be reasonable. For example, the data range from 0.57 to 0.7, so I choose μ1=0.6 and μ2=0.65 (plausible values within the range) with σ1²=σ2²=0.0004 (close to the variance of the data, 0.000364). Initially I have no preference for either of the two Gaussian distributions making up the mixture, so I take α1=α2=0.5.

E-Step
The augmented data that we want are 𝐲, the mixture component (Gaussian distribution) to which each of the observations 𝐱 belongs. Given θ=(μ1,μ2,σ1,σ2,α1,α2) and 𝐱, we can compute the probability (expectation) of the observations coming from each of the two distributions.

Since the observations are assumed to be independent,

$$P(\mathbf{Y}=\mathbf{y}\mid\theta,\mathbf{x})=\prod_{i=1}^{n}P(Y_i=y_i\mid\theta,x_i).$$

By Bayes’ theorem, for $i=1,2,\ldots,n$,

$$\begin{aligned}
\mathbb{E}[1_{\{Y_i=1\}}\mid\theta,x_i]=P(Y_i=1\mid\theta,x_i) &= \frac{P(X_i=x_i,Y_i=1\mid\theta)}{P(X_i=x_i\mid\theta)}\\
&= \frac{P(X_i=x_i,Y_i=1\mid\theta)}{P(X_i=x_i,Y_i=1\mid\theta)+P(X_i=x_i,Y_i=2\mid\theta)}\\
&= \frac{\dfrac{\alpha_1}{\sqrt{2\pi}\,\sigma_1}\exp\{-(x_i-\mu_1)^2/(2\sigma_1^2)\}}{\dfrac{\alpha_1}{\sqrt{2\pi}\,\sigma_1}\exp\{-(x_i-\mu_1)^2/(2\sigma_1^2)\}+\dfrac{\alpha_2}{\sqrt{2\pi}\,\sigma_2}\exp\{-(x_i-\mu_2)^2/(2\sigma_2^2)\}},
\end{aligned}$$

with $\mathbb{E}[1_{\{Y_i=2\}}\mid\theta,x_i]=1-\mathbb{E}[1_{\{Y_i=1\}}\mid\theta,x_i]$.

Thus in the E-step we compute the expectation 𝔼[1{Yi=1}|θ,xi], which we shall call pi (with 𝔼[1{Yi=2}|θ,xi]=1−pi).

M-Step
We can now plug the expected values of 1{yi=1} and 1{yi=2}, obtained in the E-step, into the full data MLEs derived above to update our estimates of θ. Specifically,

$$\begin{aligned}
\mu_1&=\frac{\sum_{i=1}^{n}x_i p_i}{\sum_{i=1}^{n}p_i}, &\qquad \mu_2&=\frac{\sum_{i=1}^{n}x_i(1-p_i)}{\sum_{i=1}^{n}(1-p_i)},\\
\sigma_1^2&=\frac{\sum_{i=1}^{n}(x_i-\mu_1)^2 p_i}{\sum_{i=1}^{n}p_i}, &\qquad \sigma_2^2&=\frac{\sum_{i=1}^{n}(x_i-\mu_2)^2(1-p_i)}{\sum_{i=1}^{n}(1-p_i)},\\
\alpha_1&=\frac{1}{n}\sum_{i=1}^{n}p_i, &\qquad \alpha_2&=\frac{1}{n}\sum_{i=1}^{n}(1-p_i)=1-\alpha_1.
\end{aligned}$$

The algorithm then iterates between the two steps (E and M): updating the probability of each observation belonging to each of the two Gaussian distributions, and then updating the estimates (via the MLEs) of the parameters. The process stops when we have convergence, that is, when two successive estimates of the parameters agree to some pre-defined precision.

For the Naples bay crab data I required successive values of α1 to differ by less than 10⁻⁴. This meant that with my chosen starting values 467 iterations were required (taking 0.22 seconds in R!), resulting in μ1=0.6319, μ2=0.6550, σ1²=0.000329, σ2²=0.000155, α1=0.444 and α2=0.556.
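
The procedure above translates almost line for line into R. The following is a minimal sketch (the function name, argument defaults and iteration cap are my own choices) for a generic numeric data vector x, using the starting values and the stopping rule on α1 described above; the crab frequency data themselves are not reproduced here.

# EM algorithm for a two-component Gaussian mixture (sketch)
em_mixture <- function(x, mu1 = 0.6, mu2 = 0.65,
                       s1sq = 4e-4, s2sq = 4e-4,
                       alpha1 = 0.5, tol = 1e-4, max_iter = 10000) {
  for (iter in 1:max_iter) {
    # E-step: p_i = P(Y_i = 1 | theta, x_i)
    d1 <- alpha1 * dnorm(x, mu1, sqrt(s1sq))
    d2 <- (1 - alpha1) * dnorm(x, mu2, sqrt(s2sq))
    p  <- d1 / (d1 + d2)

    # M-step: weighted means, variances and mixing proportion
    mu1  <- sum(x * p) / sum(p)
    mu2  <- sum(x * (1 - p)) / sum(1 - p)
    s1sq <- sum((x - mu1)^2 * p) / sum(p)
    s2sq <- sum((x - mu2)^2 * (1 - p)) / sum(1 - p)
    alpha1_new <- mean(p)

    # Convergence: successive values of alpha1 agree to within tol
    converged <- abs(alpha1_new - alpha1) < tol
    alpha1 <- alpha1_new
    if (converged) break
  }
  list(mu = c(mu1, mu2), sigma2 = c(s1sq, s2sq),
       alpha = c(alpha1, 1 - alpha1), iterations = iter)
}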

1.3 The EM algorithm

The EM algorithm is an iterative algorithm, introduced in [2], designed to compute the maximum likelihood estimate when there is missing data. It is iterative in that it consists of a series of steps called iterations, in which the parameter value gets repeatedly updated until a convergence criterion is met. The algorithm converges to a local maximum of the likelihood function (i.e. not necessarily the global maximum). Thus if the likelihood function is unimodal the EM algorithm will converge to the maximum likelihood estimate (MLE).

The EM algorithm is primarily used for maximising the likelihood when the task would become easier given more information associated with the existing data. This situation is called an incomplete data problem, because we do not have this extra information. A prime example is mixture distributions such as the one given above. In the Naples crab example, if we knew from which of the two Gaussian distributions each observation came, parameter estimation would be straightforward.

The EM algorithm gets its name from the two steps involved in each iteration: the E-step (expectation) and the M-step (maximisation). The procedure alternates between these two steps (so it goes E, M, E, M, …) until convergence is achieved (successive values are sufficiently close).

Let x represent our observed data and θ the parameter(s) of interest. (In the Naples crab example, x represents the crab ratios and θ represents the parameters of the two Gaussian distributions and the mixture proportions.) By choosing a suitable parametric model we can write down the likelihood function L(θ;x), and our aim is then to find the value of θ that maximises L(θ;x), or equivalently ℓ(θ;x)=log(L(θ;x)), i.e. to find the MLE (maximum likelihood estimate) of θ. Suppose that it is possible to think of some additional data y such that the ‘complete data’ log-likelihood ℓ(θ;x,y) is of a simpler form than the ‘incomplete data’ log-likelihood ℓ(θ;x) and thus easier to maximise. (In the Naples crab example, y represents from which of the two Gaussian distributions each observation comes.) We would prefer to work with the log-likelihood of x and y, but y is not available, so we estimate this log-likelihood by taking its expectation conditional on the available data x, in such a way that the estimated log-likelihood remains easy to maximise. However, computing this expectation requires a value of θ, so it can only be done iteratively, using the current value of θ. These are the key ideas behind the EM algorithm. Starting with an initial value for θ, we:

  1. Estimate the log-likelihood ℓ(θ;x,y) using the current value of θ by taking expectation conditioning on x.

  2. Maximise the estimated log-likelihood with respect to θ to get an updated value for θ.

  3. Continue steps 1 and 2 until convergence is achieved.

Note that the above is given in terms of the log-likelihood of the complete data x and y because it is often easier to work with than the likelihood.

More specifically, the procedure is as follows. Let θ* denote the current estimate of θ. We define the function

$$Q(\theta,\theta^*)=\int \ell(\theta;x,y)\,f(y\mid x;\theta^*)\,dy,$$

that is, in the expectation step,

$$Q(\theta,\theta^*)=E_{\theta^*}\left[\ell(\theta;x,Y)\mid X=x\right].$$

The expectation is with respect to the conditional distribution of Y given the observed data x, using the current estimate θ* of θ, i.e. f(y|x;θ*). Note that Q(θ,θ*) is the expectation of the complete data log-likelihood function, taking the unavailable data Y as random. The E- and M-steps can then formally be specified as:

  1. The E-step: calculation of (the key elements of) Q(θ,θ*) as a function of θ.
    (Determination of Q(θ,θ*) by taking expectation might sound daunting, but it is often fairly straightforward. Only the key elements of Q are calculated, so that it can be maximised in the M-step.)

  2. The M-step: maximisation of Q(θ,θ*) with respect to θ to get θ**, which becomes θ* in the next iteration.
    (Maximisation of Q(θ,θ*) should be much easier than maximising ℓ(θ;x), often with an explicit solution.)

The EM algorithm can be interpreted as ‘data augmentation’ when the complete data log-likelihood is linear in the missing data y: y is estimated in the E-step and substituted into the M-step. That is, rather than working with the somewhat off-putting Q(θ,θ*), the E-step reduces to computing E[y|x,θ], which is then substituted for y in the M-step. In the Naples crab example this is not quite the case, but we simply computed the probability that each observation belongs to each of the two Gaussian distributions and substituted this, via 𝔼[1{yi=1}|xi,θ]=P(yi=1|xi,θ), into the MLE. In other words, we can formally show that the straightforward procedure we employed for the Naples crab data is in fact an implementation of the EM algorithm.

1.4 How does the EM algorithm work?

The EM algorithm is therefore a deterministic algorithm that alternates between the expectation and maximisation steps. Even though it works with the (estimated) complete data log-likelihood, it actually increases the incomplete data log-likelihood ℓ(θ;x) in each iteration. Let us prove this step by step.

First, we give an overview of the notation. Below we assume the data are continuous for simplicity of exposition, but the same arguments (with minor modifications) apply to discrete data or a mixture of continuous and discrete data. Note that the Naples crab example involves such a mixture: the crab ratios 𝐱 are continuous, whilst the allocation variables 𝐲 are discrete. The full data likelihood is f(x,y|θ), the probability density function of the observed and augmented data given parameters θ. The observed data likelihood is $f(x\mid\theta)=L(\theta;x)=\int f(x,y\mid\theta)\,dy$. Finally, f(y|x,θ) is the conditional distribution of y given x and θ.

Because we maximise Q(θ,θ*) as a function of its first argument to get θ**,

$$\begin{aligned}
Q(\theta^{**},\theta^*)-Q(\theta^*,\theta^*) &\ge 0\\
\int\log\big(f(x,y\mid\theta^{**})\big)f(y\mid x,\theta^*)\,dy-\int\log\big(f(x,y\mid\theta^*)\big)f(y\mid x,\theta^*)\,dy &\ge 0\\
\int\log\left(\frac{f(x,y\mid\theta^{**})}{f(x,y\mid\theta^*)}\right)f(y\mid x,\theta^*)\,dy &\ge 0.
\end{aligned}\qquad(1.1)$$

Since $f(x,y\mid\theta)=f(y\mid x,\theta)f(x\mid\theta)$, we can write

$$\int\log\left(\frac{f(y\mid x,\theta^{**})}{f(y\mid x,\theta^*)}\right)f(y\mid x,\theta^*)\,dy+\int\log\left(\frac{f(x\mid\theta^{**})}{f(x\mid\theta^*)}\right)f(y\mid x,\theta^*)\,dy\ge 0.\qquad(1.2)$$

Using the inequality $\log(x)\le x-1$, we can deduce that the first integral cannot be positive:

$$\begin{aligned}
\int\log\left(\frac{f(y\mid x,\theta^{**})}{f(y\mid x,\theta^*)}\right)f(y\mid x,\theta^*)\,dy &\le \int\left(\frac{f(y\mid x,\theta^{**})}{f(y\mid x,\theta^*)}-1\right)f(y\mid x,\theta^*)\,dy\\
&= \int f(y\mid x,\theta^{**})\,dy-\int f(y\mid x,\theta^*)\,dy\\
&= 0.
\end{aligned}\qquad(1.3)$$

Therefore the second integral must be non-negative:

$$\begin{aligned}
\int\log\left(\frac{f(x\mid\theta^{**})}{f(x\mid\theta^*)}\right)f(y\mid x,\theta^*)\,dy &\ge 0\\
\log\left(\frac{f(x\mid\theta^{**})}{f(x\mid\theta^*)}\right)\int f(y\mid x,\theta^*)\,dy &\ge 0\\
\log\big(f(x\mid\theta^{**})\big)-\log\big(f(x\mid\theta^*)\big) &\ge 0.
\end{aligned}\qquad(1.4)$$

Since $f(x\mid\theta)=L(\theta;x)$, we have that $\ell(\theta^{**};x)\ge\ell(\theta^*;x)$: the log-likelihood is increasing (or at the very least non-decreasing) from one iteration to the next.

We have shown that at each iteration of the EM algorithm the likelihood does not decrease. Furthermore, it can be shown that if the iterations converge, then they converge to a stationary point of the likelihood function. This will typically be a local maximum but, unless the likelihood function is unimodal, not necessarily the global maximum, i.e. the MLE.

Advantages

  1. It often has meaningful solutions and an interpretation as data augmentation.

  2. Convergence is easy to ‘detect’, since we wait until successive values of θ fall within a certain level of variation.

Disadvantages

  1. It can be slow to converge.

  2. We need to be able to compute the E-step, which restricts applications. (This can be circumvented to some extent by using a Monte Carlo EM algorithm.)

1.5 Genetics example

This example actually originates from the paper by [2] which introduced the EM algorithm.

The data consist of the genetic linkage of 197 animals, divided into four genetic categories labelled 1 to 4; see [4] and [2]. The probabilities that an animal belongs to each of the four categories are 1/2+θ/4, (1−θ)/4, (1−θ)/4 and θ/4, respectively. Let 𝐱=(x1,x2,x3,x4)=(125,18,20,34) denote the numbers of animals in each category. For example, the probability of belonging to category 1 is 1/2+θ/4 and there are 125 observed animals in category 1. It is possible to maximise this multinomial likelihood directly and hence obtain the MLE for θ without recourse to the EM algorithm. However, it is far simpler to apply the EM algorithm. To show that the EM algorithm works, we do both for this example.

MLE - observed data

The likelihood for θ given 𝐱 is

$$\begin{aligned}
L(\theta;\mathbf{x}) &= \frac{(x_1+x_2+x_3+x_4)!}{x_1!\,x_2!\,x_3!\,x_4!}\left(\frac{2+\theta}{4}\right)^{x_1}\left(\frac{1-\theta}{4}\right)^{x_2+x_3}\left(\frac{\theta}{4}\right)^{x_4}\\
&\propto (2+\theta)^{x_1}(1-\theta)^{x_2+x_3}\theta^{x_4}.
\end{aligned}\qquad(1.5)$$

Therefore the log-likelihood satisfies

$$\ell(\theta;\mathbf{x})=K+x_1\log(2+\theta)+(x_2+x_3)\log(1-\theta)+x_4\log(\theta),\qquad(1.6)$$

where K is a constant. Therefore

$$\frac{d}{d\theta}\ell(\theta;\mathbf{x})=\frac{x_1}{2+\theta}-\frac{x_2+x_3}{1-\theta}+\frac{x_4}{\theta}.\qquad(1.7)$$

By setting $\frac{d}{d\theta}\ell(\theta;\mathbf{x})=0$, we obtain from (1.7) the following quadratic (in θ) equation:

$$\begin{aligned}
x_1(1-\theta)\theta-(x_2+x_3)(2+\theta)\theta+x_4(2+\theta)(1-\theta) &= 0\\
-\{x_1+x_2+x_3+x_4\}\theta^2+\{x_1-2(x_2+x_3)-x_4\}\theta+2x_4 &= 0\\
-197\theta^2+15\theta+68 &= 0.
\end{aligned}\qquad(1.8)$$

This yields

$$\theta=\frac{-15\pm\sqrt{15^2-4\times 68\times(-197)}}{2\times(-197)}=-0.5507\ \text{or}\ 0.6268.$$

Since θ needs to lie between 0 and 1, we get θ=0.6268.
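
As a quick numerical check (not part of the notes' supplied code), the observed-data log-likelihood (1.6) can be maximised directly in R, or the quadratic (1.8) solved, giving the same answer:

# Direct maximisation of the observed-data log-likelihood (1.6)
x <- c(125, 18, 20, 34)
loglik <- function(theta) {
  x[1] * log(2 + theta) + (x[2] + x[3]) * log(1 - theta) + x[4] * log(theta)
}
optimise(loglik, interval = c(0.01, 0.99), maximum = TRUE)$maximum  # ~ 0.6268

# Equivalently, the roots of -197 theta^2 + 15 theta + 68 = 0
polyroot(c(68, 15, -197))   # -0.5507 and 0.6268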

EM algorithm - full data

What extra information is going to make computation of the MLE simpler?

Suppose that we can choose y so that we have an augmented data likelihood of the form

$$L(\theta;\mathbf{x},y)=C\,\theta^{A}(1-\theta)^{B}.\qquad(1.9)$$

Then

$$\ell(\theta;\mathbf{x},y)=\log C+A\log\theta+B\log(1-\theta)$$

with

$$\frac{d}{d\theta}\ell(\theta;\mathbf{x},y)=\frac{A}{\theta}-\frac{B}{1-\theta}.$$

Setting $\frac{d}{d\theta}\ell(\theta;\mathbf{x},y)=0$ gives the linear equation

$$A(1-\theta)-B\theta=0.$$

It is then trivial to show that

$$\theta=\frac{A}{A+B}.$$

Note that likelihoods of the form (1.9) are common in statistics, for example, binomial (geometric, negative binomial) data.

What is stopping the genetic linkage data from having a likelihood of the form (1.9)? The first cell with probability 1/2+θ/4.

What if we break the probability 1/2+θ/4 into 1/2 and θ/4, a component independent of θ and a component proportional to θ? Below we detail how this can be done.

Suppose that the observed cell count x1 could be divided into two subcategories, A and B. Suppose that y (respectively x1−y) is the number of animals in subcategory A (respectively B), with cell probability θ/4 (respectively 1/2). This would give an augmented data set (𝐱,y).

The likelihood for θ given (𝐱,y) is

$$\begin{aligned}
L(\theta;\mathbf{x},y) &= \frac{(x_1+x_2+x_3+x_4)!}{y!\,(x_1-y)!\,x_2!\,x_3!\,x_4!}\left(\frac{1}{2}\right)^{x_1-y}\theta^{y}(1-\theta)^{x_2+x_3}\theta^{x_4}\\
&\propto (1-\theta)^{x_2+x_3}\theta^{y+x_4}.
\end{aligned}\qquad(1.10)$$

Hence, there exists a constant C such that

$$\begin{aligned}
\ell(\theta;\mathbf{x},y) &= \log(L(\theta;\mathbf{x},y))\\
&= \log C+(x_2+x_3)\log(1-\theta)+(y+x_4)\log(\theta).
\end{aligned}\qquad(1.11)$$

(This yields $\hat{\theta}=(y+x_4)/(y+x_2+x_3+x_4)$, using the observations after (1.9).)

E-step
From (1.11), we have that

$$\begin{aligned}
Q(\theta,\theta^*) &= E[\ell(\theta;\mathbf{x},y)\mid\theta^*,\mathbf{x}]\\
&= E[\log C\mid\theta^*,\mathbf{x}]+E[(y+x_4)\log(\theta)+(x_2+x_3)\log(1-\theta)\mid\theta^*,\mathbf{x}]\\
&= K+(E[y\mid\theta^*,\mathbf{x}]+x_4)\log(\theta)+(x_2+x_3)\log(1-\theta),
\end{aligned}$$

since (x2,x3,x4) are known, fixed constants and K does not depend on θ, so plays no role in the M-step (and can be ignored). Therefore the only quantity that needs to be computed in the E-step is E[y|θ*,𝐱].

What is the distribution of y given θ* and 𝐱?
There are 125 animals in category 1, each of which is independently assigned to either subcategory A or subcategory B. The conditional probability of an animal belonging to subcategory A, given that it belongs to category 1, is simply

$$\frac{P(\text{Subcategory A and Category 1})}{P(\text{Category 1})}=\frac{P(\text{Subcategory A})}{P(\text{Category 1})}=\frac{\theta^*/4}{\theta^*/4+1/2}=\frac{\theta^*}{\theta^*+2}.$$

Thus $y\mid\theta^*,\mathbf{x}\sim\mathrm{Bin}\!\left(125,\,\theta^*/(2+\theta^*)\right)$, and hence $E[y\mid\theta^*,\mathbf{x}]=125\theta^*/(2+\theta^*)$. This gives

$$Q(\theta,\theta^*)=C+\left\{\frac{125\theta^*}{2+\theta^*}+34\right\}\log(\theta)+38\log(1-\theta).\qquad(1.12)$$

M-step
The M-step is straightforward and simply involves solving $\frac{d}{d\theta}Q(\theta,\theta^*)=0$ (which we have effectively already done above). From (1.12), we have that

$$\frac{d}{d\theta}Q(\theta,\theta^*)=\left\{\frac{125\theta^*}{2+\theta^*}+34\right\}\frac{1}{\theta}-\frac{38}{1-\theta}.$$

Hence θ solves

$$\left\{\frac{125\theta^*}{2+\theta^*}+34\right\}\frac{1}{\theta}-\frac{38}{1-\theta}=0,$$

which, following the steps above, yields

$$\theta=\frac{125\theta^*/(2+\theta^*)+34}{125\theta^*/(2+\theta^*)+34+38}.$$

For this example, we have that the maximum likelihood estimate (MLE) is θ=0.6268 to 4 decimal places. Starting with θ0=0.5 as the initial value for θ, successive iterations of the EM algorithm yield:

i     1        2        3        4        5        6
θi    0.5000   0.6082   0.6243   0.6265   0.6268   0.6268
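
A minimal R sketch of this iteration is given below (the function name and tolerance are my own choices); it also prints the observed-data log-likelihood (1.6) at each step which, in line with Section 1.4, never decreases.

# EM algorithm for the genetic linkage example (sketch)
em_linkage <- function(theta = 0.5, x = c(125, 18, 20, 34), tol = 1e-6) {
  obs_loglik <- function(th) {
    x[1] * log(2 + th) + (x[2] + x[3]) * log(1 - th) + x[4] * log(th)
  }
  repeat {
    Ey <- x[1] * theta / (2 + theta)                     # E-step: E[y | theta*, x]
    theta_new <- (Ey + x[4]) / (Ey + x[4] + x[2] + x[3]) # M-step
    cat(sprintf("theta = %.4f, log-likelihood = %.4f\n",
                theta_new, obs_loglik(theta_new)))
    if (abs(theta_new - theta) < tol) return(theta_new)
    theta <- theta_new
  }
}

em_linkage()   # converges to approximately 0.6268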

1.6 Right censored data

The EM algorithm is particularly useful when we have censored data, which arise in many experiments and medical trials. In a medical study, we could be interested in the time until death following major surgery. However, we may only be able to follow up patients for 2 years after surgery. That is, for a patient alive 2 years after surgery we do not know when the death of the patient occurs, only that death occurs more than 2 years after surgery.

Suppose that $X_1,X_2,\ldots,X_n$ are independent and identically distributed according to $f(x\mid\theta)$. However, assume that there is right censoring at the value a. That is, for all observations greater than a, the only information we have is that the observation exceeds a. The common notation is to denote censored values superscripted with an asterisk. Therefore, we have an incomplete sample (suitably rearranged)

$$\mathbf{x}=(x_1,\ldots,x_m,a^*,\ldots,a^*).$$

In this case, the likelihood based on all we know about $x_1,\ldots,x_n$ is

$$L(\theta;\mathbf{x})=\left\{\prod_{i=1}^{m}f(x_i\mid\theta)\right\}\{1-F(a\mid\theta)\}^{n-m},$$

where F is the cdf (cumulative distribution function) corresponding to f(x). Typically it is difficult to maximise $L(\theta;\mathbf{x})$ because of the presence of the non-standard term $\{1-F(a\mid\theta)\}^{n-m}$. Let $\mathbf{y}=(x_{m+1},x_{m+2},\ldots,x_n)$ represent the true but unobserved values. Then the pair $(\mathbf{x},\mathbf{y})$ provides the complete data $(x_1,\ldots,x_n)$ and

$$L(\theta;\mathbf{x},\mathbf{y})=\prod_{i=1}^{n}f(x_i\mid\theta),$$

which is usually straightforward to maximise.

Suppose that X ∼ Gam(2,δ), where δ is unknown. Suppose that we have the following 15 observations, with the data censored at a=2.5:

1.226,2.500,1.229,0.576,1.925
2.500,1.437,1.217,1.836,2.500
2.500,1.643,2.225,2.500,2.067

Note that the cdf of X is given by

$$F(x)=1-e^{-\delta x}(1+\delta x),\qquad x>0,$$

whilst the pdf of X is given by $f(x)=\delta^2 x e^{-\delta x}$, $x>0$. We reorder the data, 𝐱, into ascending order; then

$$L(\delta;\mathbf{x})=\left\{\prod_{i=1}^{10}\delta^2 x_i e^{-\delta x_i}\right\}\left(e^{-2.5\delta}(1+2.5\delta)\right)^{5}.$$

This is difficult to maximise.
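
Difficult analytically, that is, but it can of course be maximised numerically; the following small R check (not part of the notes' supplied code) is useful for comparing against the EM answer obtained below.

# Direct numerical maximisation of the censored-data log-likelihood
x <- c(1.226, 2.500, 1.229, 0.576, 1.925,
       2.500, 1.437, 1.217, 1.836, 2.500,
       2.500, 1.643, 2.225, 2.500, 2.067)
a    <- 2.5
obs  <- x[x < a]        # the 10 uncensored observations
ncen <- sum(x >= a)     # the 5 censored observations

loglik <- function(delta) {
  sum(2 * log(delta) + log(obs) - delta * obs) +   # uncensored contribution
    ncen * (-delta * a + log(1 + delta * a))       # censored contribution
}
optimise(loglik, interval = c(0.01, 10), maximum = TRUE)$maximum  # ~ 0.8387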

Let yi denote the actual value of the ith observation. Then for i=1,2,…,10, yi=xi and for i=11,…,15, yi>xi.

Then

$$\begin{aligned}
\ell(\delta;\mathbf{x},\mathbf{y}) &= \sum_{i=1}^{15}\log\left(\delta^2 y_i e^{-\delta y_i}\right)\\
&= \sum_{i=1}^{15}\left\{2\log(\delta)+\log(y_i)-\delta y_i\right\}\\
&= 30\log(\delta)-\delta\sum_{i=1}^{15}y_i+C,
\end{aligned}$$

where C does not depend on δ.

Hence,

$$\frac{d}{d\delta}\ell(\delta;\mathbf{x},\mathbf{y})=\frac{30}{\delta}-\sum_{i=1}^{15}y_i,$$

which when set equal to 0 solves to give $\hat{\delta}=2/\bar{y}$, where $\bar{y}$ is the sample mean.

Now, conditioning on what we know about $x_1,\ldots,x_n$ and taking the expectation at the current estimate δ*,

$$E[\ell(\delta;\mathbf{x},\mathbf{Y})\mid\mathbf{x}]=30\log(\delta)-\delta(x_1+\cdots+x_{10}+5x^*)+C,$$

where $x^*=\mathbb{E}[X\mid X>a]$ is computed using the conditional density of X given X>a. This leads to

$$Q(\delta,\delta^*)=30\log(\delta)-\delta(x_1+\cdots+x_{10}+5x^*)+\text{const},$$

which is maximised at

$$\delta^{**}=\frac{30}{x_1+\cdots+x_{10}+5x^*},$$

where

$$x^*=\frac{2+2\delta^* a+(\delta^*)^2 a^2}{\delta^*(1+\delta^* a)}.$$

The EM algorithm consists of calculation of x* in the E-step and δ** in the M-step, which are repeated until convergence.

For the given data, x1+⋯+x10=15.381 and the EM algorithm converges to 0.8387.
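
The lab asks you to complete an outline of an EM function for this example; one possible sketch (my own naming, initial value and stopping rule), which simply iterates the x* and δ** updates derived above, is:

# EM algorithm for the right-censored Gamma(2, delta) example (sketch)
x <- c(1.226, 2.500, 1.229, 0.576, 1.925,
       2.500, 1.437, 1.217, 1.836, 2.500,
       2.500, 1.643, 2.225, 2.500, 2.067)
a    <- 2.5
obs  <- x[x < a]            # the 10 uncensored observations
ncen <- sum(x >= a)         # the 5 censored observations

delta <- 1                  # arbitrary starting value
repeat {
  # E-step: x* = E[X | X > a] under the current delta
  xstar <- (2 + 2 * delta * a + delta^2 * a^2) / (delta * (1 + delta * a))
  # M-step: delta** = 30 / (x_1 + ... + x_10 + 5 x*)
  delta_new <- (2 * length(x)) / (sum(obs) + ncen * xstar)
  if (abs(delta_new - delta) < 1e-8) break
  delta <- delta_new
}
delta   # approximately 0.8387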

Note that

$$\begin{aligned}
x^* &= \mathbb{E}[X\mid X>a]\\
&= \int_{-\infty}^{\infty}x\,f_X(x\mid x>a)\,dx.
\end{aligned}$$

Now

$$f_X(x\mid x>a)=\frac{1_{\{x>a\}}\,f_X(x)}{P(X>a)},$$

where $P(X>a)=1-F(a)$. Therefore

$$\begin{aligned}
x^* &= \int_{a}^{\infty}\frac{x\,f_X(x)}{1-F(a)}\,dx\\
&= \frac{1}{1-F(a)}\int_{a}^{\infty}x\,f(x)\,dx\\
&= \frac{1}{(1+\delta a)\exp(-\delta a)}\int_{a}^{\infty}x\,\delta^2 x\exp(-\delta x)\,dx\\
&= \frac{1}{(1+\delta a)\exp(-\delta a)}\times\frac{1}{\delta}\left(2+2\delta a+\delta^2 a^2\right)\exp(-\delta a)\\
&= \frac{2+2\delta a+\delta^2 a^2}{\delta(1+\delta a)}.
\end{aligned}$$
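
This closed form is easy to verify numerically in R (a quick check at arbitrary values of δ and a, using the built-in gamma density and distribution functions):

# Numerical check of E[X | X > a] for X ~ Gamma(shape = 2, rate = delta)
delta <- 0.8
a     <- 2.5

closed_form <- (2 + 2 * delta * a + delta^2 * a^2) / (delta * (1 + delta * a))

numerator <- integrate(function(x) x * dgamma(x, shape = 2, rate = delta),
                       lower = a, upper = Inf)$value
numerical <- numerator / pgamma(a, shape = 2, rate = delta, lower.tail = FALSE)

c(closed_form, numerical)   # the two values agree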

1.7 Standard errors

As usual in maximum likelihood estimation, the calculation of standard errors requires the inverse of the observed information matrix (the negative Hessian of the log-likelihood):

$$\left[-\frac{\partial^2\log f(\mathbf{x}\mid\theta)}{\partial\theta_i\,\partial\theta_j}\right]^{-1}\qquad(1.13)$$

evaluated at the MLE θ̂. For missing data problems this is usually difficult to evaluate directly. However, the structure of the EM algorithm can be exploited to assist in the calculations. Although the methodology can be applied to a vector θ, we will restrict attention to the case where θ is a single parameter. In this case, the standard error of the estimator θ̂ is

$$1\Big/\sqrt{-\frac{d^2}{d\theta^2}\log f(\mathbf{x}\mid\theta)}\;\bigg|_{\theta=\hat{\theta}}.\qquad(1.14)$$

First note that $f(\mathbf{x},\mathbf{y}\mid\theta)=f(\mathbf{x}\mid\theta)f(\mathbf{y}\mid\mathbf{x},\theta)$, thus

$$\begin{aligned}
f(\mathbf{x}\mid\theta) &= f(\mathbf{x},\mathbf{y}\mid\theta)/f(\mathbf{y}\mid\mathbf{x},\theta),\\
-\log f(\mathbf{x}\mid\theta) &= -\log f(\mathbf{x},\mathbf{y}\mid\theta)-\{-\log f(\mathbf{y}\mid\mathbf{x},\theta)\},\\
-\frac{d^2}{d\theta^2}\log f(\mathbf{x}\mid\theta) &= -\frac{d^2}{d\theta^2}\log f(\mathbf{x},\mathbf{y}\mid\theta)-\left\{-\frac{d^2}{d\theta^2}\log f(\mathbf{y}\mid\mathbf{x},\theta)\right\}.
\end{aligned}\qquad(1.15)$$

The equation (1.15) forms the basis for the missing information principle:

Observed information = Complete information − Missing information

Multiplying both sides of (1.15) by $f(\mathbf{y}\mid\mathbf{x},\phi)$ and integrating with respect to 𝐲, we get (for any ϕ)

$$\begin{aligned}
\int-\frac{d^2}{d\theta^2}\log f(\mathbf{x}\mid\theta)\,f(\mathbf{y}\mid\mathbf{x},\phi)\,d\mathbf{y} &= \int-\frac{d^2}{d\theta^2}\log f(\mathbf{x},\mathbf{y}\mid\theta)\,f(\mathbf{y}\mid\mathbf{x},\phi)\,d\mathbf{y}-\int\left\{-\frac{d^2}{d\theta^2}\log f(\mathbf{y}\mid\mathbf{x},\theta)\right\}f(\mathbf{y}\mid\mathbf{x},\phi)\,d\mathbf{y}\\
-\frac{d^2}{d\theta^2}\log f(\mathbf{x}\mid\theta) &= -\frac{\partial^2 Q(\theta,\phi)}{\partial\theta^2}-\left(-\frac{\partial^2 H(\theta,\phi)}{\partial\theta^2}\right),
\end{aligned}\qquad(1.16)$$

where

$$\begin{aligned}
Q(\theta,\phi) &= \int\log\big(f(\mathbf{x},\mathbf{y}\mid\theta)\big)f(\mathbf{y}\mid\mathbf{x},\phi)\,d\mathbf{y},\\
H(\theta,\phi) &= \int\log\big(f(\mathbf{y}\mid\mathbf{x},\theta)\big)f(\mathbf{y}\mid\mathbf{x};\phi)\,d\mathbf{y}.
\end{aligned}$$

Here we use ϕ rather than θ so that it is not involved in the differentiation with respect to θ; ϕ will be set equal to θ in what follows.

We know how to deal with the Q function. For the H function, we can use

$$\begin{aligned}
-\frac{\partial^2 H(\theta,\phi)}{\partial\theta^2}\bigg|_{\phi=\theta=\hat{\theta}} &= \mathrm{var}\left(\frac{d\log f(\mathbf{Y}\mid\mathbf{x},\theta)}{d\theta}\,\bigg|\,\mathbf{X}=\mathbf{x},\hat{\theta}\right)\\
&= \mathrm{var}\left(\frac{d\log f(\mathbf{x},\mathbf{Y}\mid\theta)}{d\theta}-\frac{d\log f(\mathbf{x}\mid\theta)}{d\theta}\,\bigg|\,\mathbf{X}=\mathbf{x},\hat{\theta}\right)\\
&= \mathrm{var}\left(\frac{d\log f(\mathbf{x},\mathbf{Y}\mid\theta)}{d\theta}\,\bigg|\,\mathbf{X}=\mathbf{x},\hat{\theta}\right).
\end{aligned}$$

(The last equality follows since, conditional upon 𝐗=𝐱 and θ=θ̂, $\frac{d\log f(\mathbf{x}\mid\theta)}{d\theta}$ is a constant and so does not alter the variance.)

The calculation of standard errors is best illustrated using an example. We shall use the genetics problem seen earlier. In that case

$$\begin{aligned}
\left[-\frac{\partial^2 Q(\theta,\phi)}{\partial\theta^2}\right]_{\phi=\theta=\hat{\theta}} &= \frac{E[Y\mid X_1=x_1,\hat{\theta}]+x_4}{\hat{\theta}^2}+\frac{x_2+x_3}{(1-\hat{\theta})^2}\\
&= \frac{63.83}{0.6268^2}+\frac{38}{(1-0.6268)^2}\\
&= 435.3,
\end{aligned}$$

since

$$Q(\theta,\theta^*)=K+(E[Y\mid X_1=x_1,\theta^*]+x_4)\log(\theta)+(x_2+x_3)\log(1-\theta),$$

and simply replace θ* by ϕ.

This is the information we would have if the estimated data (the expected value for y) had been genuine. We also have that

$$\frac{d\log f(\mathbf{x},y\mid\theta)}{d\theta}=\frac{y+x_4}{\theta}-\frac{x_2+x_3}{1-\theta}.$$

Note that all the observed terms have variance 0 conditioning on 𝐱, so

$$\begin{aligned}
\mathrm{var}\left(\frac{d\log f(\mathbf{x},Y\mid\theta)}{d\theta}\,\bigg|\,\mathbf{X}=\mathbf{x},\hat{\theta}\right) &= \frac{1}{\hat{\theta}^2}\,\mathrm{var}(Y\mid X_1=x_1,\hat{\theta})\\
&= \frac{125}{\hat{\theta}^2}\left(\frac{\hat{\theta}}{2+\hat{\theta}}\right)\left(\frac{2}{2+\hat{\theta}}\right)\\
&= 57.8,
\end{aligned}$$

since $Y\mid X_1=125\sim\mathrm{Bin}\!\left(125,\frac{\theta}{2+\theta}\right)$.

Therefore we have that

$$-\frac{d^2\log f(\mathbf{x}\mid\hat{\theta})}{d\theta^2}=435.3-57.8=377.5,$$

giving the standard error of θ̂ as $1/\sqrt{377.5}=0.0515$. Note that if the augmented data were genuine then the standard error of θ̂ would be $1/\sqrt{435.3}=0.0479$. Thus, as we would expect, the presence of missing data leads to larger standard errors for the parameters.
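
These calculations are easily reproduced in R; a small sketch using the quantities derived above:

# Standard error for the genetic linkage example via the
# missing information principle: observed = complete - missing.
x <- c(125, 18, 20, 34)
theta_hat <- 0.6268

Ey <- x[1] * theta_hat / (2 + theta_hat)                   # E[Y | x, theta_hat]
complete_info <- (Ey + x[4]) / theta_hat^2 +
                 (x[2] + x[3]) / (1 - theta_hat)^2         # approx 435.3
missing_info  <- (x[1] * (theta_hat / (2 + theta_hat)) *
                  (2 / (2 + theta_hat))) / theta_hat^2     # var(Y | x) / theta_hat^2, approx 57.8

observed_info <- complete_info - missing_info              # approx 377.5
1 / sqrt(observed_info)                                    # standard error, approx 0.0515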

1.8 Lab Session and R code

Each week R code associated with the examples in the lecture notes will be provided for practice either before or during the lab session. Ideally you should have a look at the R code in advance of the lab session.

Code for the genetic linkage data and the normal mixture example is available on Moodle, together with a supplementary file for simulating data from a mixture of normal distributions. Familiarise yourself with these algorithms and generate different data sets to try out the normal mixture code.

For the right censored data example there is the outline of a function to implement the EM algorithm. Complete the EM algorithm code and test it on the data given in the lecture notes.

Spend no more than 40 minutes on the above.

Epidemic Example

The Reed-Frost epidemic model is suitable for modelling a disease outbreak in a household of size n. The model assumes that the population is homogeneously mixing (infections are equally likely to be with anybody) and is an SIR (Susceptible-Infected-Removed) model, where individuals can only be infected once. Each infectious individual has probability p, whilst infectious, of infecting a susceptible individual in the household, and the chances of infecting two different susceptibles are independent.

Consider a household of size 3 with 1 initial infective and 2 initially susceptible individuals. The initial infective can infect either 0, 1 or 2 of the initial susceptibles and these events occur with probability (1-p)2, 2p(1-p) and p2, respectively. In the case that the initial infective infects nobody the epidemic finishes with only one person infected. In the case that the initial infective infects two individuals, everybody has become infected and the epidemic infects three people. In the remaining case, where one initial susceptible is infected by the initial infective either the second infective infects the remaining susceptible (probability p) or not (probability 1-p). Therefore there are four distinct epidemic outcomes in the household.

Outcome     Probability
{1}         (1-p)²
{1,1}       2p(1-p) × (1-p)
{1,1,1}     2p(1-p) × p
{1,2}       p²

The first outcome {1} results in one person infected, the second outcome {1,1} results in two people infected and the final two outcomes {1,1,1} and {1,2} result in all three people infected.

Suppose that a community consists of 334 households of size 3, with the following epidemic final size data observed.

Final size           1    2    3
No. of households    34   25   275

This is the Rhode Island measles data set given in Bailey (1975), p.254.

  1. Write down the likelihood L(p;𝐱) of 𝐱=(x1,x2,x3) given p, where xi denotes the total number of households with i people infected.

  2. (Leave this question to the end if you are short on time.) Find $\frac{d}{dp}l(p;\mathbf{x})=\frac{d}{dp}\log L(p;\mathbf{x})$. Solve $\frac{d}{dp}l(p;\mathbf{x})=0$ to find the MLE, p̂.

  3. Let y denote the total number of households where the initial infective infects both of the other individuals in the household, i.e. outcome {1,2} occurs.

    Write down the likelihood L(p;𝐱,y) of 𝐱 and y given p.

  4. Simplify l(p;𝐱,y)=log L(p;𝐱,y) and compute the MLE, p̂, for (𝐱,y).
    (This will help form the M-step of the EM algorithm.)

  5. What is the distribution of y given 𝐱 and p?
    Hint: think about similarities between this problem and the genetics example.

  6. Write down $Q(p,p^*)=E_{p^*}[\,l(p;\mathbf{x},y)\mid\mathbf{x}\,]$.
    (This will help form the E-step of the EM algorithm.)

  7. Write an EM algorithm in R to find the MLE p̂.

  8. Calculate the standard error of the MLE p̂.

1.9 Practice Questions

These are exam type questions for your practice.

  1. In a biological experiment the number of offspring X of a fruit fly is assumed to have probability mass function given by

    $$p\,f(x)+(1-p)\,g(x)$$

    with

    $$f(x)=\begin{cases}1 & x=0\\ 0 & \text{otherwise}\end{cases}$$

    and

    $$g(x)=\begin{cases}\dfrac{\lambda^x}{x!}\exp(-\lambda) & x=0,1,\ldots\\ 0 & \text{otherwise.}\end{cases}$$

    That is, f is a point mass at 0 and g is a Poisson distribution with parameter λ.

    Suppose that x1,x2,…,xn are the offspring from n fruit flies and that the parameters p and λ are unknown.

    (a) Write down the joint likelihood of p and λ given x1,x2,…,xn.

    Let $\alpha=\frac{1}{n}\sum_{i=1}^{n}1_{\{x_i=0\}}$ (the proportion of the fruit flies in the sample who have no offspring) and let $\mu=\frac{1}{n}\sum_{i=1}^{n}x_i$ (the mean number of offspring per fruit fly in the sample).

    Let U1,U2,…,Un be unobserved independent and identically distributed Bernoulli random variables with P(Ui=1)=p (=1−P(Ui=0)). Suppose that if Ui=1 then Xi is distributed according to f, whereas if Ui=0 then Xi is distributed according to g.

    (b) Show that the joint log-likelihood of p and λ given 𝐱=(x1,x2,…,xn) and 𝐔=(U1,U2,…,Un) is

    $$l(p,\lambda\mid\mathbf{x},\mathbf{U})=m\log p+(n-m)\log(1-p)+n\mu\log\lambda-(n-m)\lambda+A,$$

    where $m=\sum_{i=1}^{n}U_i$ and A is a constant independent of p and λ.

    (c) Find the maximum likelihood estimates (MLEs) of p and λ given 𝐱=(x1,x2,…,xn) and 𝐔=(U1,U2,…,Un).

    (d) Show that

    $$E[U_i\mid p,\lambda,x_i]=\begin{cases}\dfrac{p}{p+(1-p)\exp(-\lambda)} & (x_i=0)\\ 0 & (x_i>0).\end{cases}$$

    (e) Give a description of the EM algorithm, in the absence of 𝐔=(U1,U2,…,Un), for finding the MLEs of p and λ.

  2. Suppose that a group of n patients are tested for angina. The test has a binary outcome: either the patient has angina or not. For i=1,2,…,n, let yi=1 if the ith patient has angina and yi=0 otherwise. Each patient also has their cholesterol level taken, with xi denoting the cholesterol level of patient i.

    A probit model is assumed for the probability that a patient has angina. That is, if Yi is a binary random variable taking the value 1 if patient i has angina and 0 otherwise, then

    $$P(Y_i=1\mid x_i)=1-\Phi(\beta x_i),$$

    where β is an unknown parameter of interest and Φ(⋅) denotes the cumulative distribution function of a standard normal: if W ∼ N(0,1), then Φ(w)=P(W≤w).

    It is difficult to obtain the maximum likelihood estimate, β̂, of β given 𝐲=(y1,…,yn) and 𝐱=(x1,…,xn). However, data imputation and the EM algorithm can be used to obtain β̂.

    Let 𝐳=(z1,…,zn) and suppose that zi satisfies

    $$\begin{aligned}
    z_i\mid x_i,\beta &\sim N(\beta x_i,1),\\
    y_i\mid x_i,\beta,z_i &= 1_{\{z_i>0\}}.
    \end{aligned}$$

    That is, the angina status of a patient (yi) is deterministic (known) on the basis of the unobserved zi, where zi follows a normal distribution with mean determined by β and xi.

    (a) Show that the likelihood of β given 𝐲 and 𝐳, L(β;𝐲,𝐳), satisfies

    $$L(\beta;\mathbf{y},\mathbf{z})=\prod_{i=1}^{n}1_{\{z_i>0\}}^{\,y_i}\,1_{\{z_i\le 0\}}^{\,1-y_i}\,(2\pi)^{-n/2}\exp\left(-\frac{1}{2}\sum_{i=1}^{n}(z_i-x_i\beta)^2\right).$$

    (b) Compute E[zi|yi=1,xi,β].

    (c) Compute the maximum likelihood estimate, β̃, of β given 𝐲 and 𝐳.
        Hint: let W ∼ N(μ,σ²). Then for any a,

    $$E[W\mid W>a]=\mu+\frac{\phi((a-\mu)/\sigma)}{1-\Phi((a-\mu)/\sigma)}\,\sigma,$$

        where ϕ(z) denotes the probability density function of a standard normal distribution evaluated at z.

    (d) Outline an EM algorithm for obtaining β̂.