Chapter 9.5 Probability of Error
Since every score has some level of error, researchers must decide how much error they are willing to accept before performing their research. This acceptable error is then compared with the probability of error obtained from the data; if the probability of error is at or below the acceptable error, the result is said to be significant. For example, if we stated at the onset of the study that we would accept 5% error and our results indicated that the probability of error was 3%, we would reject the null hypothesis and state that the difference between the two groups was significant. If, however, the probability of error were shown to be 6%, we would accept the null hypothesis and state that the difference between the two groups was not significant.
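This decision rule can be written as a simple comparison. The short Python sketch below is illustrative only; the alpha of 5% and the p values of 3% and 6% are the hypothetical figures from the example above.

# Decision rule: reject the null hypothesis when the probability
# of error (p) is at or below the acceptable error (alpha).
def is_significant(p, alpha=0.05):
    return p <= alpha

alpha = 0.05                          # acceptable error, fixed before the study
print(is_significant(0.03, alpha))    # True  -> reject the null; significant
print(is_significant(0.06, alpha))    # False -> accept the null; not significant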
The probability of error is often abbreviated with a lowercase 'p,' and the acceptable error is abbreviated with a lowercase Greek alpha (α). When we accept the null, then p > α, and when we reject the null, then p ≤ α. You will often see these symbols at the end of significance statements in research reports. While alpha can vary depending on the level set at the onset of the experiment, it should not change once the experiment begins. Common levels of acceptable error (referred to as the significance level) include, in order of use, 0.05, 0.01, 0.001, and 0.10.
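In practice, p is computed from the data by a statistical test rather than chosen by the researcher. The sketch below, one possible illustration rather than a prescribed procedure, uses SciPy's independent-samples t-test on two made-up groups of scores; the group data are hypothetical, and the resulting p value will change with the inputs.

from scipy import stats

alpha = 0.05  # set at the onset of the experiment and not changed afterward

# Hypothetical scores for two groups
group_a = [12, 15, 14, 10, 13, 16, 11]
group_b = [18, 17, 19, 16, 20, 15, 17]

# The independent-samples t-test returns the test statistic and p
t_stat, p = stats.ttest_ind(group_a, group_b)

if p <= alpha:
    print(f"p = {p:.3f} <= alpha = {alpha}: reject the null (significant)")
else:
    print(f"p = {p:.3f} > alpha = {alpha}: accept the null (not significant)")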