

In classical test theory, the total variance of observed scores on a test (σ_X²) is defined (Lord & Novick, 1968) to be the sum of true score variance (σ_T²) and error variance (σ_E²):

σ_X² = σ_T² + σ_E²

The reliability of the test is the proportion of observed variance that is true score variance:

ρ_XT² = σ_T²/σ_X² = 1 − (σ_E²/σ_X²).

An essential feature of the definition of a reliability coefficient is that, as a proportion of variance, it should in theory range between 0 and 1 in value. Unfortunately, the definitions given here include unobservable true and error scores, and when we turn from theory to practice, our attempts to estimate reliabilities can produce unexpected results. In practice, the possible values of estimates of reliability range from −∞ to 1, rather than from 0 to 1.

To see that this is the case, let's look at the most commonly cited formula for computation of Coefficient α, the most popular reliability coefficient:

α = [k/(k − 1)] [1 − (Σσ_i²/σ_X²)]

where k is the number of items, Σσ_i² is the sum of the individual item variances taken over all k items, and σ_X² is the scale variance. Since the term in the first set of brackets is always positive, α will be negative if and only if

Σσ_i² > σ_X².

In other words, α will be negative whenever the sum of the individual item variances is greater than the scale variance.
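This condition is easy to inspect directly. Below is a minimal sketch, not the SPSS RELIABILITY procedure itself, showing how Coefficient α follows from the formula above; the function name cronbach_alpha and the small data matrix are illustrative assumptions, with NumPy used for the variance computations.

```python
import numpy as np

def cronbach_alpha(items):
    """Coefficient alpha for a cases-by-items matrix of scores.

    Implements alpha = [k / (k - 1)] * [1 - sum(item variances) / scale variance].
    """
    k = items.shape[1]                                 # number of items
    item_var_sum = items.var(axis=0, ddof=1).sum()     # sum of individual item variances
    scale_var = items.sum(axis=1).var(ddof=1)          # variance of the total (scale) score
    return (k / (k - 1)) * (1 - item_var_sum / scale_var)

# Illustrative data: 6 cases scored on 3 positively related items.
scores = np.array([[1, 2, 2],
                   [2, 3, 3],
                   [3, 3, 4],
                   [4, 4, 4],
                   [4, 5, 5],
                   [5, 5, 5]], dtype=float)

print(cronbach_alpha(scores))   # close to 1 for these strongly related items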

Since the variance of the sum of a set of random variables is equal to the sum of the individual variances plus twice the sum of their covariances (see, e.g., Hays, 1981, Appendix C), and since the scale score is the sum of the individual item scores, the scale variance can be expressed as

σ_X² = Σσ_i² + 2ΣΣσ_ij

where σ_ij denotes the covariance between items i and j, and the double summation is taken over all combinations of i and j where i ≠ j. Thus, we can translate the necessary and sufficient condition for α to be negative as

2ΣΣσ_ij < 0.

In words, α will be negative whenever twice the sum of the item covariances is negative. This can be stated even more simply by saying that α will be negative whenever the average covariance among the items is negative.

To see that α can go to −∞, consider a scale consisting of two items with equal variance and a perfect negative correlation of −1. Since the covariance σ_12 between items 1 and 2 is defined (see, e.g., Lord & Novick, 1968) as

σ_12 = ρ_12 σ_1 σ_2,

where ρ_12 is the correlation between the items, two items with equal variances and ρ_12 = −1 have σ_12 = −σ_1² = −σ_2². Plugging these into the denominator of the ratio in the formula for α, we get

σ_X² = σ_1² + σ_2² + 2σ_12 = 0,

so the ratio Σσ_i²/σ_X² grows without bound and α goes to −∞.
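The limiting argument can be checked numerically. The sketch below, again only an illustration with made-up data, builds two items whose correlation is pushed toward −1 and recomputes the sample estimate of α at each step; as the scale variance shrinks toward 0, the estimate diverges toward −∞.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)

# The second item is (nearly) the first one reflected, so the correlation
# approaches -1 and the variance of the scale score x + y approaches 0.
for noise in (1.0, 0.1, 0.01):
    y = -x + noise * rng.normal(size=1000)
    items = np.column_stack([x, y])
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    scale_var = items.sum(axis=1).var(ddof=1)
    alpha = (k / (k - 1)) * (1 - item_var_sum / scale_var)
    print(f"noise sd = {noise}: alpha = {alpha:.1f}")   # grows more and more negative
```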
Though this is the most extreme case, SPSS users occasionally present α values that are negative and have magnitudes greater than 1, and want to know how this can happen. It must be borne in mind that α is actually a lower bound on the true reliability of a test under general conditions, and that it will only equal the true reliability if the items satisfy a property known as essential τ-equivalence (Lord & Novick, 1968), which requires that their true scores either are all the same, or that each item's true score can be converted to any other item's true score by adding a fixed constant. This implies that in order for α to be a measure of reliability instead of a lower bound, the items must be measuring the same thing. Note that even if the items do satisfy the essential τ-equivalence assumption, if there is a good deal of error in measurement, sample values of α may be negative even though the population values are positive.

If one encounters a negative value for α, implying a negative average covariance among items, the first thing that should be checked is whether data or item coding errors are responsible. A common problem of this type is that the scale consists of some items that are worded in opposite directions to alleviate response biases, and the researcher has forgotten to appropriately recode the reverse-scored items, resulting in negative covariances where the actual covariances of interest are positive.
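A quick way to test the recoding explanation is to reverse-score the suspect items and recompute α. The sketch below assumes hypothetical 1-to-5 Likert responses in which the third item is worded in the opposite direction; the recoding rule (6 minus the raw score) applies only to that assumed 1-to-5 scale.

```python
import numpy as np

def cronbach_alpha(items):
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(axis=0, ddof=1).sum()
                            / items.sum(axis=1).var(ddof=1))

# Hypothetical 1-5 Likert responses; the third item is worded in the
# opposite direction and was left unrecoded.
raw = np.array([[5, 4, 1],
                [4, 5, 2],
                [4, 4, 2],
                [2, 3, 4],
                [2, 1, 5],
                [1, 2, 5]], dtype=float)

print(cronbach_alpha(raw))       # negative: the third column covaries negatively

fixed = raw.copy()
fixed[:, 2] = 6 - fixed[:, 2]    # reverse-score the third item (1<->5, 2<->4, 3 unchanged)
print(cronbach_alpha(fixed))     # positive once all items point the same way
```

With these invented responses the uncorrected estimate is not only negative but larger than 1 in magnitude, matching the situation described above; after recoding, it is positive.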

Another possibility, most likely with small sample sizes and small numbers of items, is that while the true population covariances among the items are positive, sampling error has produced a negative average covariance in a given sample of cases. This becomes less likely as the numbers of cases and items increase, because sampling variability is reduced.
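This sampling-error possibility can be illustrated by simulation. The sketch below draws repeated small samples from a population of essentially τ-equivalent items with a great deal of measurement error (which works out to a population α of 0.25 under these particular choices) and reports how often the sample estimate is negative; the sample size, error variance, and number of replications are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(42)

def cronbach_alpha(items):
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(axis=0, ddof=1).sum()
                            / items.sum(axis=1).var(ddof=1))

n_cases, n_items, n_reps = 10, 3, 2000
negatives = 0
for _ in range(n_reps):
    true_score = rng.normal(size=(n_cases, 1))           # shared (tau-equivalent) true score
    error = 3.0 * rng.normal(size=(n_cases, n_items))    # heavy measurement error
    sample = true_score + error
    if cronbach_alpha(sample) < 0:
        negatives += 1

print(f"{negatives / n_reps:.0%} of the small samples gave a negative alpha")
```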
