A family of statistical measures that indicates the magnitude of a treatment effect. Unlike significance tests, effect sizes are independent of sample size, which is why they are widely used in meta-analyses. Broadly speaking, effect size boils down to two kinds of measurement: the standardised difference between the means of two independent groups, and the effect size correlation (i.e., the correlation between the independent variable and scores on the dependent variable). For the first, one can use either Cohen’s d (the difference between the means divided by the standard deviation of either group, the two being assumed equal; in practice the pooled standard deviation is commonly used) or Hedges’ g (the difference between the means divided by the square root of the mean square error derived from an ANOVA). For the second, it amounts to a point-biserial correlation between a dichotomous independent variable and a continuously scaled dependent variable. While bearing in mind the degree of subjectivity involved, effect sizes of 0.8 and greater are considered ‘large’, with those between 0.3 and 0.5 and those between 0.0 and 0.2 classified as ‘medium’ and ‘small’, respectively. Thus, for example, an effect size of 0.8 means that the score of an ‘average’ individual in the experimental group exceeds the scores of roughly 79% of the control group. There is some debate about the best way to compute effect sizes for dependent or repeated measures; in such cases, it is recommended to use the original standard deviations rather than the paired t-test value or the within-subject F value.
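The two measurements described above can be sketched in code. This is an illustrative sketch, not part of the entry: the function names are hypothetical, the pooled-standard-deviation form of the denominator is assumed for the standardised mean difference, and the d-to-r conversion used is the standard one for two independent groups of sizes n1 and n2.

```python
import math

def cohens_d(treatment, control):
    # Standardised mean difference: (m1 - m2) / pooled SD.
    # The entry allows either group's SD (assumed equal); the pooled SD
    # used here is the common practical choice and equals the square root
    # of the ANOVA mean square error for two groups.
    n1, n2 = len(treatment), len(control)
    m1 = sum(treatment) / n1
    m2 = sum(control) / n2
    ss1 = sum((x - m1) ** 2 for x in treatment)
    ss2 = sum((x - m2) ** 2 for x in control)
    pooled_sd = math.sqrt((ss1 + ss2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

def effect_size_r(d, n1, n2):
    # Effect size correlation (point-biserial r) recovered from d;
    # reduces to d / sqrt(d**2 + 4) when n1 == n2.
    return d / math.sqrt(d ** 2 + (n1 + n2) ** 2 / (n1 * n2))
```

For equal group sizes, a ‘large’ d of 0.8 converts to r = 0.8 / √(0.64 + 4) ≈ 0.37, illustrating that a large standardised difference corresponds to only a moderate effect size correlation. Likewise, the 79% figure in the entry follows from the standard normal distribution: Φ(0.8) ≈ 0.79.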
See Analysis of variance (ANOVA), Error term, Influence efficacy, Meta-analysis, Precision, Statistical power