
2.2.2 Variance

In the same way that we can look at the expected value of an estimator, we can also examine its variance. We continue to assume that $X_1, \ldots, X_n$ is an IID sample from a population with mean $\mu$ and variance $\sigma^2$.

For a good estimator, the variance will decrease as the sample size $n$ increases. In other words, as the sample size increases, the between-sample variability of the estimates gets smaller and smaller.

Example 2.2.3 Sample mean cont.

As seen in Math230, and discussed above, the variance of the sample mean is

\[
\begin{aligned}
\operatorname{Var}(\bar{X}) &= \operatorname{Var}\!\left(\frac{1}{n}\sum_{i=1}^{n} X_i\right) \\
&= \frac{1}{n^2}\sum_{i=1}^{n}\operatorname{Var}(X_i)
\end{aligned}
\]

by independence, so

\[
\begin{aligned}
\operatorname{Var}(\bar{X}) &= \frac{1}{n^2}\sum_{i=1}^{n}\sigma^2 \\
&= \frac{\sigma^2}{n}.
\end{aligned}
\]
Remark.

In this example, as the sample size increases, the variance of the estimator tends to zero. This is seen to be a desirable property, and is related to the concept of consistency.
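
As an illustration of the $\sigma^2/n$ behaviour, a short simulation (not from the original notes; the normal population, $\sigma^2 = 4$, the sample sizes and the number of replications below are arbitrary assumed choices) compares the empirical variance of $\bar{X}$ over repeated samples with the theoretical value:

```python
import numpy as np

# Sketch: compare the empirical variance of the sample mean over many
# repeated samples with the theoretical value sigma^2 / n.
# The normal population, sigma^2 = 4 and n_reps are assumed choices.
rng = np.random.default_rng(seed=1)
sigma2 = 4.0
n_reps = 20_000  # number of repeated samples at each sample size

for n in (10, 100, 1000):
    samples = rng.normal(scale=np.sqrt(sigma2), size=(n_reps, n))
    xbar = samples.mean(axis=1)       # sample mean of each replication
    print(n, xbar.var(), sigma2 / n)  # empirical vs theoretical variance
```

The empirical variances should shrink roughly in proportion to $1/n$, matching $\sigma^2/n$.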

Definition.

An estimator of a parameter $\theta$ is consistent if it tends to the true value of the parameter as the sample size tends to infinity, i.e. as $n \to \infty$.

More formally, suppose we have data $X_1, \ldots, X_n$. Let $\theta$ be the true value of a parameter and let $\hat{\theta}(X_1, \ldots, X_n)$ be an estimator. Then we say $\hat{\theta}$ is a consistent estimator for $\theta$ if, for all $\epsilon > 0$,

\[
\Pr\bigl[\,\lvert \hat{\theta}(X_1, \ldots, X_n) - \theta \rvert > \epsilon \,\bigr] \to 0
\]

as $n \to \infty$.
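
The definition can also be checked numerically (a sketch, not part of the notes): the Monte Carlo estimate below of $\Pr[\lvert \bar{X} - \mu \rvert > \epsilon]$ should decrease towards zero as $n$ grows. The Exponential(1) population, $\epsilon = 0.1$ and the number of replications are arbitrary assumed choices.

```python
import numpy as np

# Sketch: Monte Carlo estimate of Pr[|Xbar - mu| > eps] for increasing n.
# Exponential(1) population (so mu = 1), eps = 0.1 and n_reps are assumptions.
rng = np.random.default_rng(seed=2)
mu, eps, n_reps = 1.0, 0.1, 20_000

for n in (10, 100, 1000, 10_000):
    xbar = rng.exponential(scale=1.0, size=(n_reps, n)).mean(axis=1)
    prob = np.mean(np.abs(xbar - mu) > eps)  # proportion of samples with |Xbar - mu| > eps
    print(n, prob)
```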

As a consequence of Chebyshev’s inequality, an unbiased estimator is consistent if its variance tends to zero as $n \to \infty$ (CHECK!).
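
A sketch of the check: Chebyshev’s inequality states that $\Pr[\lvert Y - E(Y) \rvert > \epsilon] \le \operatorname{Var}(Y)/\epsilon^2$ for any $\epsilon > 0$. Applying this to $Y = \hat{\theta}$, which satisfies $E(\hat{\theta}) = \theta$ by unbiasedness, gives

\[
\Pr\bigl[\,\lvert \hat{\theta} - \theta \rvert > \epsilon \,\bigr]
\le \frac{\operatorname{Var}(\hat{\theta})}{\epsilon^2}
\to 0
\qquad \text{as } n \to \infty,
\]

provided $\operatorname{Var}(\hat{\theta}) \to 0$, which is exactly the definition of consistency.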

Clearly $\hat{\mu} = \bar{X}$ is a consistent estimator for $\mu$.

Exercise.

Show that the (silly!) estimator $\hat{\mu}_1 = X_1$ is an unbiased estimator for $\mu$, but is not a consistent estimator for $\mu$. Hint: how does the variance of $\hat{\mu}_1$ behave as the sample size increases?
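
A numerical hint (not a full solution; the normal population, $\sigma^2 = 4$ and the number of replications are assumed choices): over repeated samples, the spread of $\hat{\mu}_1$ stays the same whatever $n$ is, while the spread of $\bar{X}$ shrinks.

```python
import numpy as np

# Hint sketch: the variance of mu_hat_1 = X_1 does not change with n,
# whereas the variance of Xbar shrinks like sigma^2 / n.
# Normal population with sigma^2 = 4 and n_reps are assumed choices.
rng = np.random.default_rng(seed=3)
sigma2, n_reps = 4.0, 20_000

for n in (10, 1000):
    samples = rng.normal(scale=np.sqrt(sigma2), size=(n_reps, n))
    mu_hat_1 = samples[:, 0]     # estimator that uses only the first observation
    xbar = samples.mean(axis=1)  # sample mean
    print(n, mu_hat_1.var(), xbar.var())
```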