Type I error

The error (denoted by α, and hence also referred to as alpha error) in statistical testing of rejecting the null hypothesis (i.e., there is no change, difference, or relationship) when it is true and accepting the alternative hypothesis (i.e., there is a change, difference, or relationship). As such, it can be considered a rate of false alarms or false positives. If α is set at the .05 level, then the chance of committing a type I error is 5 times out of 100 (given an infinite series of tests). Ways of combating type I errors are to use a more stringent α level than .05 (e.g., .01), or to apply the Bonferroni inequality procedure (viz., α/k, where k = the number of tests), named after the Italian mathematician Carlo Emilio Bonferroni (1892-1960).

In contrast, a type II error (denoted by β, and hence also referred to as beta error) is accepting the null hypothesis when it is false. Thus, it is the rate of missed alarms or false negatives. This error can be counteracted by increasing the power of the test, for instance by increasing the size of the sample. More often than not, type I errors are considered more important to avoid than type II errors, because most investigators want only a small chance of concluding that something is true when it is not. Stated otherwise, it is often safer to be wrong by false denial (type II error) than by false affirmation (type I error). While the exact probability of committing a type I error can be specified in advance, it is generally unknown for a type II error. Sometimes, introductory statistics textbooks refer to a type III error: correctly rejecting the null hypothesis but incorrectly inferring the direction of the effect.

See Statistical power
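The two ideas above can be illustrated with a short simulation sketch: when the null hypothesis is actually true, a two-sided test at α = .05 should reject about 5 times out of 100, and the Bonferroni procedure divides α by the number of tests k to obtain a stricter per-test level. The function names below are illustrative, not part of the entry, and the simulation assumes a simple z-test on samples drawn from a standard normal population.

```python
import math
import random
from statistics import NormalDist

def bonferroni_alpha(alpha, k):
    """Bonferroni-corrected per-test significance level: alpha / k."""
    return alpha / k

def simulate_type_i_rate(alpha=0.05, n=30, trials=10_000, seed=0):
    """Estimate the false-alarm (type I error) rate of a two-sided z-test
    when the null hypothesis is true: every sample is drawn from N(0, 1),
    so any rejection is a false positive."""
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. ~1.96 for alpha = .05
    rejections = 0
    for _ in range(trials):
        sample_mean = sum(rng.gauss(0, 1) for _ in range(n)) / n
        z = sample_mean * math.sqrt(n)  # sigma known to be 1 here
        if abs(z) > z_crit:
            rejections += 1
    return rejections / trials

# With k = 5 tests, each test is run at .05 / 5 = .01.
per_test_alpha = bonferroni_alpha(0.05, 5)
# The observed rejection rate should hover near the nominal .05.
rate = simulate_type_i_rate()
```

Over many trials the estimated rate settles close to the nominal α, which is exactly the "5 times out of 100" interpretation given in the entry.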