Math330 Exercises Week 3

  • WS3.1

    Let $X_1, \ldots, X_n$ be independent with $X_i \sim N(\alpha + \beta z_i, 1)$ for known constants $\{z_i\}$.

    Calculate the log-likelihood $\ell(\alpha, \beta)$.

    Hence calculate the MLE, $(\hat\alpha, \hat\beta)$.

    Calculate also the Fisher information matrix $I_E(\alpha, \beta)$.

    Give a condition on $\{z_i\}$ which leads to the orthogonality of $\alpha$ and $\beta$.
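
    A quick way to check an analytical answer here is to maximise the log-likelihood numerically on simulated data. Below is a minimal sketch in Python; the values of alpha_true, beta_true and the covariates z are arbitrary illustrative choices, not part of the exercise.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)

    # Arbitrary illustrative values (not part of the exercise).
    alpha_true, beta_true, n = 1.0, 2.0, 200
    z = rng.uniform(-1, 1, size=n)                    # known constants z_i
    x = rng.normal(alpha_true + beta_true * z, 1.0)   # X_i ~ N(alpha + beta z_i, 1)

    def neg_loglik(params):
        a, b = params
        # Negative log-likelihood of independent N(a + b z_i, 1) observations,
        # dropping the additive constant (n/2) * log(2 * pi).
        return 0.5 * np.sum((x - a - b * z) ** 2)

    fit = minimize(neg_loglik, x0=[0.0, 0.0])
    print("numerical MLE:", fit.x)   # compare with your analytical (alpha_hat, beta_hat)
    ```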

  • WS3.2

    Suppose $X_1, \ldots, X_n$ are independent, identically distributed random variables with pdf

    \[
      f(x \mid \alpha, \gamma) =
      \begin{cases}
        \gamma^{-1} \exp\{-(x - \alpha)/\gamma\}, & \text{when } x \ge \alpha, \\
        0, & \text{when } x < \alpha,
      \end{cases}
    \]

    where $\gamma > 0$.

    For this model

    \[
      \mathbb{E}[X_i \mid \alpha, \gamma] = \alpha + \gamma
    \]

    where $1 \le i \le n$, and

    \[
      \mathbb{E}\Bigl[\min_{1 \le i \le n} X_i\Bigr] = \alpha + \frac{\gamma}{n}.
    \]

    Suppose now we have data from this model, $x_1, \ldots, x_n$. Show that the likelihood function can be written

    \[
      L(\alpha, \gamma) =
      \begin{cases}
        \gamma^{-n} \exp\Bigl\{-\sum_{i=1}^{n} (x_i - \alpha)/\gamma\Bigr\}, & \text{for } \alpha \le x_i \text{ for all } i, \\
        0, & \text{otherwise.}
      \end{cases}
    \]

    By first maximising in $\alpha$, or otherwise, find expressions for $(\hat\alpha, \hat\gamma)$ as functions of $x_1, \ldots, x_n$.

    Is $\hat\alpha$ unbiased for $\alpha$? Is $\hat\gamma$ unbiased for $\gamma$? Briefly justify your answers.
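
    The two expectations quoted above can be checked by simulation before you use them. A minimal sketch in Python follows; the values of alpha, gamma, n and the number of replications are arbitrary illustrative choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Arbitrary illustrative values (not part of the exercise).
    alpha, gamma, n, reps = 3.0, 2.0, 10, 200_000

    # X_i = alpha + Exponential(scale=gamma) has the shifted-exponential pdf above.
    x = alpha + rng.exponential(scale=gamma, size=(reps, n))

    print("mean of X_i       :", x.mean(), " vs alpha + gamma     =", alpha + gamma)
    print("mean of min_i X_i :", x.min(axis=1).mean(), " vs alpha + gamma / n =", alpha + gamma / n)
    ```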

  • QZ3.3

    This question will be completed as the quiz on Moodle.

    Figure 1 shows a simulated dataset from the $\text{Gamma}(\alpha, \beta)$ distribution, with $\alpha = 5$, $\beta = 0.5$. Four points are labelled ‘1’ to ‘4’ on the contour plot; the corresponding density functions ‘a’ to ‘d’ are superimposed onto the histogram. This week’s Moodle quiz will ask you to match the density functions ‘a’ to ‘d’ with the points ‘1’ to ‘4’.

    Figure 1: Simulated dataset from the $\text{Gamma}(\alpha, \beta)$ distribution, with $\alpha = 5$, $\beta = 0.5$.
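
    If you want to reproduce a figure of this kind yourself, the sketch below simulates Gamma data, plots a histogram with a density superimposed, and draws log-likelihood contours. It assumes $\beta$ is a rate parameter (scale $1/\beta$); check this against the parameterisation used in the lecture notes. The sample size and grid ranges are arbitrary choices.

    ```python
    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.special import gammaln
    from scipy.stats import gamma

    rng = np.random.default_rng(3)

    # Assumed parameterisation: Gamma(alpha, beta) with shape alpha and RATE beta,
    # i.e. scale = 1 / beta; check against the lecture notes.
    alpha, beta, n = 5.0, 0.5, 100
    x = rng.gamma(shape=alpha, scale=1 / beta, size=n)

    # Histogram of the data with the generating density superimposed.
    grid = np.linspace(0.01, x.max(), 400)
    plt.hist(x, bins=20, density=True, alpha=0.4)
    plt.plot(grid, gamma.pdf(grid, a=alpha, scale=1 / beta))
    plt.xlabel("x")
    plt.show()

    # Log-likelihood surface over a grid of (alpha, beta) values.
    A, B = np.meshgrid(np.linspace(2, 9, 200), np.linspace(0.2, 1.0, 200))
    loglik = n * (A * np.log(B) - gammaln(A)) + (A - 1) * np.log(x).sum() - B * x.sum()
    plt.contour(A, B, loglik, levels=30)
    plt.xlabel("alpha")
    plt.ylabel("beta")
    plt.show()
    ```
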
  • WS3.4

    (Invariance principle) Let $\Theta \subseteq \mathbb{R}^d$ be some parameter space and $g: \Theta \to \mathbb{R}^d$ be a one-to-one function ($g(\theta) \ne g(\theta')$ if $\theta \ne \theta'$). Also, let $f(\theta; X)$ be a density function. We want to use $g$ to reparameterise our model class. What is a natural choice for our new parameter space $\Xi$? Also define the new likelihood function $\tilde{f}(\xi; X)$ for all $\xi \in \Xi$. Show that if $\hat\theta$ achieves the highest likelihood, i.e. $f(\hat\theta; x) \ge f(\theta; x)$ for all $\theta \in \Theta$ and some sample $x$, then also $\tilde{f}(g(\hat\theta); x) \ge \tilde{f}(\xi; x)$ for all $\xi \in \Xi$. Finally, show that this also holds in the opposite direction, i.e. if $\tilde{f}(g(\hat\theta); x) \ge \tilde{f}(\xi; x)$ for all $\xi \in \Xi$, then also $f(\hat\theta; x) \ge f(\theta; x)$ for all $\theta \in \Theta$.
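
    Before proving the statement, it can help to see it numerically in a concrete case. The sketch below uses an Exponential($\theta$) model and the one-to-one map $g(\theta) = \log\theta$; both are arbitrary illustrative choices, not part of the question.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(4)
    x = rng.exponential(scale=2.0, size=50)   # illustrative sample

    def loglik(theta):
        # Log-likelihood of Exponential(theta) data, theta being the rate.
        return len(x) * np.log(theta) - theta * x.sum()

    # Reparameterise with g(theta) = log(theta), so xi = log(theta) and the
    # reparameterised likelihood is f_tilde(xi; x) = f(exp(xi); x).
    theta_hat = minimize_scalar(lambda t: -loglik(t), bounds=(1e-6, 50), method="bounded").x
    xi_hat = minimize_scalar(lambda xi: -loglik(np.exp(xi)), bounds=(-10, 10), method="bounded").x

    print("theta_hat      :", theta_hat)
    print("g(theta_hat)   :", np.log(theta_hat))
    print("argmax over xi :", xi_hat)   # should agree with g(theta_hat)
    ```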

  • CW3.5

    In this exercise, you may use the fact that an $\text{Exponential}(\theta)$ distribution, with $\theta > 0$, has density function

    \[
      f(x \mid \theta) = \theta \exp(-\theta x)
    \]

    for $x > 0$, with expected value $1/\theta$ and variance $1/\theta^2$. Let $x_1, \ldots, x_n$ be observations assumed to be independently generated from

    \[
      X_i \sim \text{Exponential}(\theta_i)
    \]

    where $i = 1, \ldots, n$ and $\theta_i = \exp(\alpha + \beta z_i)$. Here $z_i$ is a known covariate for observation $i$ and $(\alpha, \beta)$ are unknown parameters. (A simulation sketch of this model is given after part (d).)

    • (a)

      Show that the expected information matrix is given by

      \[
        I_E(\alpha, \beta) =
        \begin{pmatrix}
          n & \sum_i z_i \\
          \sum_i z_i & \sum_i z_i^2
        \end{pmatrix}.
      \]

      HINT: Recall that the $z_i$s are treated as known and the $X_i$s as random.
      (3 marks)

    • (b)

      Use the expected information matrix to give a condition on the $z_i$ under which parameter orthogonality holds. Explain the relevance of parameter orthogonality for inference. (2 marks)

    • (c)

      Find the asymptotic distribution of the maximum likelihood estimator $(\hat\alpha, \hat\beta)$, quoting the general result used. Note: You may use the following result:

      \[
        \begin{pmatrix} a & b \\ c & d \end{pmatrix}^{-1}
        = \frac{1}{ad - bc}
        \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}.
      \]

      (2 marks)

    • (d)

      Find the standard error of $\hat\beta$ in the following two cases: (1) when $\alpha$ is also unknown, and (2) when $\alpha$ is known. Under what circumstances are both standard errors identical? (Justify your answer.) (3 marks)
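
    As mentioned above, here is a simulation sketch of this model, useful for sanity-checking a fitted model numerically. The true parameter values and the covariates are arbitrary illustrative choices, not part of the coursework, and the numerical fit is no substitute for the analytical derivations asked for in parts (a) to (d).

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(5)

    # Arbitrary illustrative values (not part of the coursework).
    alpha_true, beta_true, n = 0.5, -1.0, 500
    z = rng.uniform(0, 1, size=n)                 # known covariates z_i
    theta = np.exp(alpha_true + beta_true * z)    # rates theta_i = exp(alpha + beta z_i)
    x = rng.exponential(scale=1 / theta)          # X_i ~ Exponential(theta_i)

    def neg_loglik(params):
        a, b = params
        th = np.exp(a + b * z)
        # log f(x_i | theta_i) = log(theta_i) - theta_i * x_i
        return -np.sum(np.log(th) - th * x)

    fit = minimize(neg_loglik, x0=[0.0, 0.0])
    print("numerical MLE (alpha_hat, beta_hat):", fit.x)
    ```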