
5.2 One-way ANOVA

Suppose that we have measured a response variable on a sample from our population. Now also suppose that the population can be split into three or more groups according to a second variable. These groups might be contrived, e.g. from an experiment run under a number of different conditions, or natural, e.g. comparing family income across a number of regions. Before we attempt to answer a question of this type, it is useful to view the data graphically. A sensible way to compare a response variable across groups is to use a boxplot. A boxplot consists of a box (spanning the lower and upper quartiles of the sample), a midline (the sample median) and two whiskers (extending to the sample minimum and maximum).

The aim of a one-way ANOVA is to compare the means of the groups. Let $\mu_i$ represent the population mean for group $i$. If there are $m$ groups, then we test

\[H_0: \mu_1 = \mu_2 = \cdots = \mu_m\]

vs.

\[H_1: \text{the } \mu_j \text{ are not all equal.}\]
Example 5.2.1 (Starlings)

The masses (in grams) of 10 starlings from each of four different roost situations were recorded. Does the mean mass differ between the roosting groups?

> load("starlingsAOV.Rdata")
> boxplot(Mass ~ Roost, data = starlings)

The resulting plot can be found in Figure 5.1. There does appear to be a clear difference in the distribution of the weights across the four groups.

Fig. 5.1: Boxplots of starling masses for four different roosting sites.

Note that we cannot test for ordering in the group means, e.g. $\mu_1 > \mu_2 > \cdots > \mu_m$, nor can we test whether the mean of group 1 alone differs from the means of all the other groups. The basis of the test is to compare two sums of squares. If the null hypothesis is true, both provide an estimate of the population variance. If the null hypothesis is false, then only one of them estimates the population variance. Therefore the ratio of the two should be close to 1 only if the null hypothesis holds.

In more detail, let $Y_{ji}$ represent the response variable for the $i$-th individual in group $j$. Suppose that there are $m$ groups, $j = 1, \ldots, m$, and that each group contains $n$ observations. In practice, we can deal with groups which have different numbers of observations, but it makes the presentation slightly messier.

The basic assumption of any ANOVA is that the random variables $Y_{ji}$ are an independent sample with

\[Y_{ji} \sim \text{Normal}(\mu_j, \sigma^2), \qquad (5.1)\]

so that each group can have a distinct mean, but the variance is the same across groups.
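
As a quick illustration, the following R sketch simulates data satisfying this assumption, with illustrative group means and a common standard deviation (hypothetical values, not the starlings data), and draws the corresponding boxplot:

> mu <- c(84, 79, 79, 75)          # hypothetical group means mu_j
> sigma <- 3.5                     # common standard deviation sigma
> m <- length(mu); n <- 10         # m groups of n observations each
> group <- factor(rep(1:m, each = n))
> y <- rnorm(m * n, mean = rep(mu, each = n), sd = sigma)
> boxplot(y ~ group)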

Let $\bar{Y}$ denote the overall mean and $\bar{Y}_j$ denote the mean of the $j$-th group, and consider the overall sum of squares (SST)

\[\text{SST} = \sum_{j=1}^{m} \sum_{i=1}^{n} (Y_{ji} - \bar{Y})^2.\]

If we consider the summand, then this can be expanded as

\begin{align*}
(Y_{ji} - \bar{Y})^2 &= (Y_{ji} - \bar{Y}_j + \bar{Y}_j - \bar{Y})^2 \\
&= (Y_{ji} - \bar{Y}_j)^2 + 2(Y_{ji} - \bar{Y}_j)(\bar{Y}_j - \bar{Y}) + (\bar{Y}_j - \bar{Y})^2.
\end{align*}

Thus the total sum of squares can be written as

\begin{align*}
\text{SST} &= \sum_{j=1}^{m} \sum_{i=1}^{n} (Y_{ji} - \bar{Y})^2 \\
&= \sum_{j=1}^{m} \sum_{i=1}^{n} \left[ (Y_{ji} - \bar{Y}_j)^2 + 2(Y_{ji} - \bar{Y}_j)(\bar{Y}_j - \bar{Y}) + (\bar{Y}_j - \bar{Y})^2 \right] \\
&= \sum_{j=1}^{m} \sum_{i=1}^{n} (Y_{ji} - \bar{Y}_j)^2 + 2\sum_{j=1}^{m} \left[ (\bar{Y}_j - \bar{Y}) \sum_{i=1}^{n} (Y_{ji} - \bar{Y}_j) \right] + n\sum_{j=1}^{m} (\bar{Y}_j - \bar{Y})^2.
\end{align*}

Now

\begin{align*}
\sum_{i=1}^{n} (Y_{ji} - \bar{Y}_j) &= \sum_{i=1}^{n} Y_{ji} - n\bar{Y}_j \\
&= n\bar{Y}_j - n\bar{Y}_j
\end{align*}

by the definition of $\bar{Y}_j$, so $\sum_{i=1}^{n} (Y_{ji} - \bar{Y}_j) = 0$.

And so the total sum of squares can be split into

\[\text{SST} = \sum_{j=1}^{m} \sum_{i=1}^{n} (Y_{ji} - \bar{Y}_j)^2 + n\sum_{j=1}^{m} (\bar{Y}_j - \bar{Y})^2.\]

The two terms on the right are referred to respectively as the within group (SSW) and the between group (SSB) sums of squares. When calculating these terms, it is usual to compute SST and SSB directly from the data, and then to calculate SSW as

\[\text{SSW} = \text{SST} - \text{SSB}.\]
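
In R, these sums of squares can be computed directly; a short sketch, assuming the starlings data frame loaded earlier, with the Mass and Roost columns used in the boxplot call:

> y <- starlings$Mass
> SST <- sum((y - mean(y))^2)                   # total sum of squares
> groupmeans <- tapply(y, starlings$Roost, mean)
> nj <- table(starlings$Roost)                  # group sizes (here 10 each)
> SSB <- sum(nj * (groupmeans - mean(y))^2)     # between group sum of squares
> SSW <- SST - SSB                              # within group sum of squares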

From the sums of squares, we calculate the mean sums of squares,

\[\text{MSB} = \frac{\text{SSB}}{m-1}\]

and

\[\text{MSW} = \frac{\text{SSW}}{m(n-1)}.\]

Under the null hypothesis, both of these quantities can be used to estimate the residual variance $\sigma^2$. Therefore to carry out the test, we calculate the ratio of these estimators,

\[F = \frac{\text{MSB}}{\text{MSW}}.\]

If the null hypothesis is true, this ratio will be close to 1. The question is, how far away from 1 does the ratio need to be in order for us to conclude that there is evidence against the null hypothesis? To answer this, we require the sampling distribution of the ratio, under the assumption that H0 is true. We can then obtain the critical region, which will contain all values of the ratio which are sufficiently unusual under H0 to allow us to reject H0.

Under assumption (5.1), both MSB and MSW are sums of squares of independent Normal random variables. Consequently, when suitably scaled by $\sigma^2$, each follows a $\chi^2$ distribution (see results from Math230). In each case, the degrees of freedom of the $\chi^2$ distribution is given by the denominator of the estimator, which is the value required to give an unbiased estimator of $\sigma^2$: $m-1$ for MSB and $m(n-1)$ for MSW. Since the ratio of two independent $\chi^2$ random variables, each divided by its degrees of freedom, has an F-distribution, the required sampling distribution is

\[F \sim F_{m-1,\, m(n-1)}.\]

By comparing the test statistic F to this sampling distribution, we can decide whether or not to reject the null hypothesis, usually based on either a critical region or a p-value.
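
In practice, R can carry out the whole procedure in one step. A minimal sketch, again assuming the Mass and Roost columns from earlier (Roost should be a factor):

> fit <- lm(Mass ~ Roost, data = starlings)
> anova(fit)     # ANOVA table: sums of squares, F statistic and p-value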

Example 5.2.2 (Starlings again)

Recall the starling masses that we saw in Example 5.2.1. Ten starlings were sampled from four different roosts. The data can be found in the file starlingsAOV.Rdata. Carry out a one-way ANOVA to test whether the mean weight of starlings varies between roosts. You should state clearly your hypotheses and conclusions.

The hypotheses are

\[H_0: \mu_1 = \mu_2 = \mu_3 = \mu_4\]

vs.

\[H_1: \mu_1, \mu_2, \mu_3, \mu_4 \text{ are not all equal.}\]

Next we need to calculate the three sums of squares. First we need the overall and group means. The overall mean is

\[\frac{1}{40} \sum_{j=1}^{4} \sum_{i=1}^{10} y_{ji} = \frac{1}{40} \times 3170 = 79.25\]

and the group means are 83.6, 79.4, 78.6 and 75.4. The sums of squares are then

\begin{align*}
\text{SST} &= \sum_{j=1}^{4} \sum_{i=1}^{10} (y_{ji} - 79.25)^2 = 797.5 \\
\text{SSB} &= 10 \times \left[ (83.6-79.25)^2 + (79.4-79.25)^2 + (78.6-79.25)^2 + (75.4-79.25)^2 \right] \\
&= 10 \times 34.19 \\
&= 341.9 \\
\text{SSW} &= \text{SST} - \text{SSB} = 797.5 - 341.9 = 455.6
\end{align*}

Next calculate the mean sums of squares,

\[\text{MSB} = \frac{\text{SSB}}{m-1} = \frac{341.9}{3} = 113.97\]

and

\[\text{MSW} = \frac{\text{SSW}}{m(n-1)} = \frac{455.6}{4 \times 9} = 12.66.\]

Finally we calculate the test statistic

\[F = \frac{\text{MSB}}{\text{MSW}} = \frac{113.97}{12.66} = 9.005.\]
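
These steps are easy to reproduce in R, using the summary figures above:

> SSB <- 10 * sum((c(83.6, 79.4, 78.6, 75.4) - 79.25)^2)   # 341.9
> SSW <- 797.5 - SSB                                       # 455.6
> (SSB / 3) / (SSW / 36)                                   # F = 9.005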

The degrees of freedom for the sampling distribution are given by the denominators in the between and within mean sums of squares: in this case, 3 and 36. So the critical region at the 5% level of significance is given by

> qf(0.95,3,36)

This gives us a critical value of 2.87 (see Figure 5.2). Since $9.005 > 2.87$ we would reject $H_0$ and conclude that there is evidence of a difference between the mean masses at the four different roosts.

Alternatively, we could calculate the p-value,

> 1-pf(9.005,3,36)

This gives a p-value of 0.000139, which is clearly less than 0.05, so again we would reject the null hypothesis.

Fig. 5.2: The density of the F3,36 sampling distribution for the F-ratio in the starling ANOVA example. The critical region for the test at the 5% level is marked in blue.

Finally, note that an alternative way to write the ANOVA assumptions is that the $Y_{ji}$ are independent with

\[Y_{ji} \sim \text{Normal}(a + b_j, \sigma^2), \qquad i = 1, \ldots, n, \; j = 1, \ldots, m.\]

Here a is the ‘base’ mean level that is common to all groups and bj is the effect on the mean of being in the j-th group. ANOVA gives us a way to test whether or not these means are the same. In the coming sections on linear regression modelling, we will see how we can estimate the size of these effects.
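
As a pointer to those sections, here is a sketch of how lm fits this model in R, again assuming the Mass and Roost columns from earlier. The model as written above is over-parameterised, so R's default treatment coding fixes $b_1 = 0$: the reported intercept then estimates the first group's mean (playing the role of $a$), and the remaining coefficients estimate the effects of the other groups relative to it.

> fit <- lm(Mass ~ Roost, data = starlings)
> coef(fit)      # intercept, then group effects relative to the first roost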