In the same way that we can look at the expected value of an estimator, we can also examine its variance. We continue to assume that $X_1, \dots, X_n$ is an IID sample from a population with mean $\mu$ and variance $\sigma^2$.
For a good estimator, the variance of the estimator should decrease as the sample size increases: in other words, as the sample size grows, the between-sample variability in estimates gets smaller and smaller.
As seen in Math230, and discussed above, the variance of the sample mean is
\[
  \operatorname{Var}(\bar{X}) = \operatorname{Var}\!\left(\frac{1}{n}\sum_{i=1}^{n} X_i\right)
  = \frac{1}{n^2}\sum_{i=1}^{n}\operatorname{Var}(X_i)
\]
by independence, so
\[
  \operatorname{Var}(\bar{X}) = \frac{n\sigma^2}{n^2} = \frac{\sigma^2}{n}.
\]
In this example the variance of the estimator tends to zero as the sample size increases. This is a desirable property, and it is closely related to the concept of consistency.
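The $\sigma^2/n$ behaviour can also be seen empirically. The following is a minimal simulation sketch, assuming an Exponential(1) population (so $\mu = 1$ and $\sigma^2 = 1$) purely for illustration; it draws many samples of each size $n$ and compares the between-sample variance of the sample means with $\sigma^2/n$.

\begin{verbatim}
import numpy as np

# Sketch: simulate the between-sample variability of the sample mean,
# assuming an Exponential(1) population, so mu = 1 and sigma^2 = 1.
rng = np.random.default_rng(seed=1)

for n in (10, 100, 1000):
    # Draw 5000 independent samples of size n and take each sample mean.
    sample_means = rng.exponential(scale=1.0, size=(5000, n)).mean(axis=1)
    # Compare the empirical variance of the sample means with sigma^2 / n.
    print(f"n = {n:4d}: simulated Var = {sample_means.var():.5f}, "
          f"theory sigma^2/n = {1.0 / n:.5f}")
\end{verbatim}

For each sample size the simulated variance should be close to $\sigma^2/n$, shrinking by roughly a factor of ten each time $n$ increases tenfold.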
An estimator of a parameter is consistent if it tends to the true value of the parameter as the sample size tends to infinity, i.e. as $n \to \infty$.
More formally, suppose we have data $X_1, \dots, X_n$. Let $\theta$ be the true value of a parameter and let $\hat{\theta}_n = \hat{\theta}_n(X_1, \dots, X_n)$ be an estimator of $\theta$. Then we say $\hat{\theta}_n$ is a consistent estimator for $\theta$ if, for all $\epsilon > 0$,
\[
  P\bigl(|\hat{\theta}_n - \theta| > \epsilon\bigr) \to 0
\]
as $n \to \infty$.
As a consequence of Chebyshev's inequality, an unbiased estimator is consistent if its variance tends to zero as $n \to \infty$ (check this!).
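As a sketch of that check: if $\hat{\theta}_n$ is unbiased, then $\mathrm{E}[\hat{\theta}_n] = \theta$, and Chebyshev's inequality gives, for any $\epsilon > 0$,
\[
  P\bigl(|\hat{\theta}_n - \theta| > \epsilon\bigr)
    \le \frac{\operatorname{Var}(\hat{\theta}_n)}{\epsilon^2}
    \to 0 \quad \text{as } n \to \infty,
\]
which is exactly the definition of consistency above.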
Clearly $\bar{X}$ is a consistent estimator for $\mu$: it is unbiased and $\operatorname{Var}(\bar{X}) = \sigma^2/n \to 0$ as $n \to \infty$.
Show that the (silly!) estimator $\tilde{\mu} = X_1$, which uses only the first observation, is an unbiased estimator for $\mu$, but is not a consistent estimator for $\mu$. Hint: how does the variance of $\tilde{\mu}$ behave as the sample size increases?