A coefficient of internal consistency (and, strictly speaking, not a statistical test) that measures the degree to which item responses obtained at the same time on a test or instrument correlate with each other. Technically, it is the mean intercorrelation of all possible item pairs, weighted by the item variances and adjusted for the number of items, such that the value obtained ranges from 0 (zero internal consistency) to 1 (perfect internal consistency). As with Cohen’s kappa coefficient, there is no test to adjudicate whether an alpha value is significant. It is, however, commonly accepted that an alpha of 0.70 or higher is acceptable, perhaps because at that level the standard error of measurement is just over half a standard deviation (SEM = SD√(1 − α), so α = 0.70 gives SEM ≈ 0.55 SD). Alpha is a function not only of the mean intercorrelation but also of the number of items: as the number of items increases, alpha becomes larger even when the intercorrelations between items are relatively low. It will also be higher when within-subject responses are more consistent, when interindividual variability is greater, and when item variances are homogeneous. Under some circumstances alpha may be negative, which usually reflects a serious coding problem, most often reverse-worded items that have not been recoded, and thus signals the need to recode the data so that all items are scored in the same direction. When computed for binary (e.g., yes/no) items, Cronbach’s alpha is identical to the Kuder-Richardson formula 20 (KR-20) for composite scales. The coefficient was devised by Lee Cronbach (1916-2001) in 1951, building on the work of Louis Guttman (1916-1987). The formula for the coefficient is given below.
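For a scale of k items with item variances σ_i² and total (composite) score variance σ_X²:

\[
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^{2}}{\sigma_X^{2}}\right)
\]

For binary items, substituting σ_i² = p_i(1 − p_i), where p_i is the proportion endorsing item i, yields KR-20. The standardized form, α = k r̄ / (1 + (k − 1) r̄), where r̄ is the mean inter-item correlation, makes the dependence on the number of items explicit: with r̄ = 0.20, ten items already give α ≈ 0.71.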
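As an illustration of the computation, the following is a minimal sketch applying the formula above to a respondents-by-items score matrix; the function and variable names are illustrative, not drawn from any particular library.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix.

    scores: 2-D array with one row per respondent and one column per item,
            all items scored in the same direction.
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the composite score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Example: five respondents answering four items on a 1-5 scale
responses = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 2, 1],
])
print(round(cronbach_alpha(responses), 3))  # about 0.96 for these consistent responses
```

Note that a negative result from such a computation is the signal, mentioned above, that some items likely need to be reverse-coded before the coefficient is interpreted.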