Math330 Additional Exercises 2

This sheet gives additional exercises covering the whole course.

  1.

    Let $X_1,\dots,X_n$ be independent with $X_i \sim N(\alpha + \beta z_i, 1)$ for known constants $\{z_i\}$.

    Calculate the log-likelihood $\ell(\alpha,\beta)$.

    Hence calculate the MLE, $(\hat\alpha, \hat\beta)$.

    Calculate also the Fisher information matrix $I_E(\alpha,\beta)$.

    Give a condition on $\{z_i\}$ which leads to the orthogonality of $\alpha$ and $\beta$. (2006 A2)
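    The MLE here coincides with the least-squares fit, so it can be checked numerically. A minimal sketch, assuming hypothetical values for $\{z_i\}$, $\alpha$ and $\beta$:

    ```python
    # Sketch: compare the numerical MLE for X_i ~ N(alpha + beta*z_i, 1)
    # with the closed-form solution of the normal equations.
    # The z_i, alpha, beta values are hypothetical.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    n, alpha, beta = 200, 1.0, 2.0
    z = rng.uniform(-1, 1, size=n)                 # known constants z_i
    x = alpha + beta * z + rng.standard_normal(n)  # X_i ~ N(alpha + beta*z_i, 1)

    def neg_loglik(theta):
        a, b = theta
        return 0.5 * np.sum((x - a - b * z) ** 2)  # -l(a, b) up to a constant

    num = minimize(neg_loglik, x0=[0.0, 0.0]).x
    # Normal equations: the coefficient matrix equals I_E(alpha, beta) here.
    A = np.array([[n, z.sum()], [z.sum(), (z ** 2).sum()]])
    closed = np.linalg.solve(A, np.array([x.sum(), (z * x).sum()]))
    print(num, closed)  # the two estimates should agree closely

    # Note: A (= I_E) is diagonal exactly when z.sum() == 0.
    ```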

  2.

    Suppose $X_1,\dots,X_n$ are IID Normal$(\mu,\sigma^2)$ random variables with $\boldsymbol{\theta} = (\mu,\sigma)$ unknown. Find

    • (a) the maximum likelihood estimator, $\hat{\boldsymbol{\theta}}$, of $\boldsymbol{\theta}$;

    • (b) the maximum likelihood estimator of $\Pr\left(\sum_{i=1}^{n} X_i < 10\right)$;

    • (c) the asymptotic variance matrix for $\hat{\boldsymbol{\theta}}$, using the observed information at $\hat{\boldsymbol{\theta}}$;

    • (d) an approximate 95% confidence interval for $\mu$.

    (1997 A3)
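    By the invariance of the MLE, part (b) is a plug-in calculation using $\sum_{i=1}^{n} X_i \sim N(n\mu, n\sigma^2)$. A minimal sketch, with hypothetical values of $\mu$, $\sigma$ and $n$:

    ```python
    # Sketch: plug-in MLE of Pr(sum X_i < 10) and an approximate 95% CI for mu.
    # The simulated mu, sigma and n are hypothetical.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(1)
    n = 50
    x = rng.normal(loc=0.1, scale=1.5, size=n)

    mu_hat = x.mean()                                # MLE of mu
    sigma_hat = np.sqrt(np.mean((x - mu_hat) ** 2))  # MLE of sigma (n, not n-1)

    # Invariance: plug the MLEs into sum X_i ~ N(n*mu, n*sigma^2).
    p_hat = norm.cdf((10 - n * mu_hat) / (np.sqrt(n) * sigma_hat))

    # Observed information at the MLE gives var(mu_hat) ~ sigma_hat^2 / n.
    ci = mu_hat + np.array([-1.0, 1.0]) * 1.96 * sigma_hat / np.sqrt(n)
    print(p_hat, ci)
    ```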

  3.

    Consider the normal model

    $f(x \mid \mu,\sigma) = (2\pi\sigma^2)^{-1/2} \exp\left\{-\frac{(x-\mu)^2}{2\sigma^2}\right\}$

    where $-\infty < x < \infty$. Find an expression for $(\hat\mu, \hat\sigma)$ and calculate its asymptotic distribution.

    Particular interest lies in the parameter $\phi = \mu/\sigma^2$. Calculate $\hat\phi$ and its asymptotic distribution. (2006 B3 part)
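    Here, and in the next exercise, the asymptotic distribution of $\hat\phi = g(\hat{\boldsymbol{\theta}})$ follows from the delta method: if $\hat{\boldsymbol{\theta}}$ is asymptotically $N(\boldsymbol{\theta}, I_E(\boldsymbol{\theta})^{-1})$ and $g$ is differentiable with $\nabla g(\boldsymbol{\theta}) \neq 0$, then

    $g(\hat{\boldsymbol{\theta}}) \text{ is asymptotically } N\left(g(\boldsymbol{\theta}),\ \nabla g(\boldsymbol{\theta})^{\top} I_E(\boldsymbol{\theta})^{-1} \nabla g(\boldsymbol{\theta})\right).$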

  4.

    Assume that $X_1,\dots,X_n$ are independent random variables with a Normal$(\mu,\sigma^2)$ distribution. It is well known that, in this case, the maximum likelihood estimator of $\boldsymbol{\theta} = (\mu,\sigma)$ is given by

    $\hat\mu = \bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i$

    and

    $\hat\sigma = \left(\frac{1}{n}\sum_{i=1}^{n} (X_i - \bar{X})^2\right)^{1/2}.$

    Furthermore, the expected information matrix is

    $I_E(\mu,\sigma) = \begin{pmatrix} n/\sigma^2 & 0 \\ 0 & 2n/\sigma^2 \end{pmatrix}.$

    • (a) Find the asymptotic distribution of the maximum likelihood estimator of $\boldsymbol{\theta} = (\mu,\sigma)$.

    • (b) Briefly comment on the implications that the diagonality of the information matrix $I_E(\mu,\sigma)$ has for parameter estimation.

    • (c) Give the maximum likelihood estimator of $\phi = g(\boldsymbol{\theta}) = \sigma/\mu$.

    • (d) Find the asymptotic distribution of the maximum likelihood estimator of $\phi = g(\boldsymbol{\theta}) = \sigma/\mu$, stating the general result used.

    (2005 A3)
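    For part (d), the delta method stated after Exercise 3 applies with $g(\mu,\sigma) = \sigma/\mu$, so $\nabla g = (-\sigma/\mu^2,\ 1/\mu)$. A minimal Monte Carlo check of the resulting variance, with hypothetical $\mu$, $\sigma$ and $n$:

    ```python
    # Sketch: delta-method variance of phi_hat = sigma_hat / mu_hat,
    # checked against simulation. mu, sigma, n, reps are hypothetical.
    import numpy as np

    rng = np.random.default_rng(2)
    mu, sigma, n, reps = 2.0, 1.0, 500, 20000

    grad = np.array([-sigma / mu ** 2, 1.0 / mu])   # gradient of g = sigma/mu
    inv_info = np.diag([sigma ** 2 / n, sigma ** 2 / (2 * n)])  # I_E^{-1}
    var_delta = grad @ inv_info @ grad              # asymptotic variance of phi_hat

    # Monte Carlo: empirical variance of phi_hat over many samples.
    x = rng.normal(mu, sigma, size=(reps, n))
    mu_hat = x.mean(axis=1)
    sigma_hat = np.sqrt(((x - mu_hat[:, None]) ** 2).mean(axis=1))
    print(var_delta, np.var(sigma_hat / mu_hat))    # should be close
    ```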

  5.

    Suppose $x_1,\dots,x_n$ are observations independently drawn from

    $X_i \sim \text{Poisson}(\mu_i),$

    where $i = 1,\dots,n$, $\mu_i = \exp(\alpha w_i + \beta z_i)$, and $w_i$ and $z_i$ are two known covariates associated with observation $i$.

    • (a) Explain why $\exp(\alpha w_i + \beta z_i)$ is a more suitable specification for $\mu_i$ than $\alpha w_i + \beta z_i$.

    • (b) Write down the log-likelihood function, $\ell(\alpha,\beta)$.

    • (c) Show that the inverse of the expected information matrix is given by

      $I_E(\alpha,\beta)^{-1} = \frac{1}{\Delta(\alpha,\beta)} \begin{pmatrix} \sum_i z_i^2 \exp(\alpha w_i + \beta z_i) & -\sum_i w_i z_i \exp(\alpha w_i + \beta z_i) \\ -\sum_i w_i z_i \exp(\alpha w_i + \beta z_i) & \sum_i w_i^2 \exp(\alpha w_i + \beta z_i) \end{pmatrix},$

      where

      $\Delta(\alpha,\beta) = \left(\sum_i w_i^2 \exp(\alpha w_i + \beta z_i)\right)\left(\sum_i z_i^2 \exp(\alpha w_i + \beta z_i)\right) - \left(\sum_i w_i z_i \exp(\alpha w_i + \beta z_i)\right)^2.$

      Note: you may use the following result:

      $\begin{pmatrix} a & b \\ c & d \end{pmatrix}^{-1} = \frac{1}{ad - bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}.$

    • (d) Find the asymptotic distribution of the maximum likelihood estimator of $\mu_1$, where $\mu_1$ is the expected value of $X_1$. Quote all results used.

    (2004 B1)
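    An end-to-end numerical sketch of this exercise, with hypothetical covariate and parameter values: fit $(\hat\alpha, \hat\beta)$, evaluate $I_E$, and apply the delta method to $\mu_1 = \exp(\alpha w_1 + \beta z_1)$, whose gradient is $\mu_1 (w_1, z_1)$.

    ```python
    # Sketch: numerical MLE for the Poisson log-linear model, the expected
    # information from part (c), and a delta-method variance for mu_1_hat.
    # All covariate and parameter values are hypothetical.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(3)
    n, alpha, beta = 300, 0.5, -0.3
    w, z = rng.uniform(0, 1, n), rng.uniform(0, 1, n)
    x = rng.poisson(np.exp(alpha * w + beta * z))   # X_i ~ Poisson(mu_i)

    def neg_loglik(theta):
        eta = theta[0] * w + theta[1] * z
        return -np.sum(x * eta - np.exp(eta))       # log(x_i!) terms dropped

    a_hat, b_hat = minimize(neg_loglik, x0=[0.0, 0.0]).x

    m = np.exp(a_hat * w + b_hat * z)               # fitted mu_i
    I_E = np.array([[np.sum(w ** 2 * m), np.sum(w * z * m)],
                    [np.sum(w * z * m), np.sum(z ** 2 * m)]])

    # Delta method: grad mu_1 = mu_1 * (w_1, z_1).
    mu1_hat = np.exp(a_hat * w[0] + b_hat * z[0])
    g = np.array([w[0], z[0]])
    var_mu1 = mu1_hat ** 2 * (g @ np.linalg.solve(I_E, g))
    print(mu1_hat, var_mu1)
    ```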