Chapter 9.6 Type I & Type II Errors

Since we accept some level of error in every study, the possibility that our results are erroneous is directly related to our acceptable level of error.  If we set alpha at 0.05, we are saying that we will accept a 5% chance of error: if the null hypothesis were true and the study were conducted 100 times, we would expect about 5 of those studies to produce significant results by chance alone, and the other 95 to produce non-significant results.  How do we know that our own study doesn't fall into that 5%?  We don't.  Only through replication can we get a better idea of this.
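To make the 5-in-100 idea concrete, here is a minimal simulation sketch (the two-group t-test setup, sample sizes, and library calls are illustrative choices, not from the text): we run 100 "studies" in which the null hypothesis is actually true and count how many come out significant at alpha = 0.05.

```python
# Illustrative sketch: run 100 studies where the null hypothesis is TRUE
# and count how many come out "significant" at alpha = 0.05 by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
alpha = 0.05
n_studies = 100
false_positives = 0

for _ in range(n_studies):
    # Both groups are drawn from the SAME population, so the null is true.
    group_a = rng.normal(loc=0.0, scale=1.0, size=30)
    group_b = rng.normal(loc=0.0, scale=1.0, size=30)
    _, p_value = stats.ttest_ind(group_a, group_b)
    if p_value < alpha:
        false_positives += 1

print(f"Significant (erroneous) results: {false_positives} out of {n_studies}")
# Roughly 5 of the 100 studies are expected to be significant by chance alone.
```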

There are two types of error that researchers are concerned with: Type I and Type II.  A Type I error occurs when the results of a study show that a difference exists but in reality there is no difference.  This is directly related to alpha: the probability of a Type I error is equal to alpha, so lowering the amount of acceptable error reduces the chances of a Type I error.
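As a quick illustration of that relationship (again a hypothetical simulation, not the author's example), we can repeat many null-true studies and check how often we falsely reject the null at two different alpha levels; the false-rejection rate tracks alpha.

```python
# Illustrative sketch: the Type I error rate falls when alpha is lowered.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies = 10_000
p_values = []

for _ in range(n_studies):
    group_a = rng.normal(size=30)   # null is true: both groups share one mean
    group_b = rng.normal(size=30)
    p_values.append(stats.ttest_ind(group_a, group_b).pvalue)

p_values = np.array(p_values)
print("Type I error rate at alpha = 0.05:", np.mean(p_values < 0.05))
print("Type I error rate at alpha = 0.01:", np.mean(p_values < 0.01))
# The rate of false rejections tracks alpha: roughly 5% and 1%, respectively.
```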

Lowering the amount of acceptable error, however, also increases the chances of a Type II error, which occurs when we fail to reject the null hypothesis even though the alternative is in fact true.  When there is a real difference in the population but we fail to find it, our study is said to lack power.  The probability of a Type II error is denoted by the lowercase Greek letter beta (β), and power, defined as 1 − β, refers to a study's ability to find a difference when a difference actually exists.  The two errors trade off against each other: the greater the chance of a Type I error, the smaller the chance of a Type II error, and vice versa.  These two errors are summarized in Figure 9.1, and a short simulation of the trade-off follows the figure.

Figure 9.1: Type I and Type II Errors

[Figure 9.1: a 2 × 2 table of the four possible outcomes: rejecting a true null hypothesis (Type I error), failing to reject a false null hypothesis (Type II error), and the two correct decisions.]
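The sketch below assumes a hypothetical true effect (a mean difference of 0.5 standard deviations with 30 participants per group); these numbers are illustrative, not from the chapter.  With a real effect present, a stricter alpha of 0.01 detects it less often than an alpha of 0.05, so the Type II error rate (beta) rises as the chance of a Type I error falls.

```python
# Illustrative sketch of the Type I / Type II trade-off: when a real difference
# exists, a stricter alpha catches it less often (lower power, higher beta).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_studies = 10_000
effect = 0.5          # assumed true difference between population means
p_values = []

for _ in range(n_studies):
    group_a = rng.normal(loc=0.0, size=30)
    group_b = rng.normal(loc=effect, size=30)   # the alternative is true
    p_values.append(stats.ttest_ind(group_a, group_b).pvalue)

p_values = np.array(p_values)
for alpha in (0.05, 0.01):
    power = np.mean(p_values < alpha)           # proportion of studies that detect the effect
    print(f"alpha = {alpha}: power = {power:.2f}, Type II error rate (beta) = {1 - power:.2f}")
# Lowering alpha from 0.05 to 0.01 reduces power, i.e. raises the chance of a Type II error.
```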