Chapter 9.3 Research Error

Research Error

Every statistic contains both a true score and an error score.  A true score is the part of the statistic or number that truly represents what was being measured.  An error score is the part of the statistic or number that represents something other than what is being measured.  Imagine standing on your bathroom scale and weighing 140 pounds, then standing on your doctor’s scale an hour later and weighing 142.  Is it likely you gained 2 pounds on the way to the doctor’s office?

The difference between the two numbers has much more to do with error than with weight gain, especially over such a short time span.  When a scale, or any measuring device, provides a score, that score is really only an estimate of your true score.  When your bathroom scale reads 140 pounds, the reading should be interpreted as an estimate of your true weight, which may actually be 141.  If that is the case, then your measured weight of 140 represents 141 pounds of true weight and one pound of error.
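This relationship can be written as a simple equation: Observed score = True score + Error score.  In the scale example, 140 = 141 + (−1); the one pound of error simply happens to fall in the negative direction.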

Confidence Level.

When we use statistics to summarize any phenomenon, we are always concerned with how much of that statistic represents the true score and how much is error.  Imagine a person scores a 100 on a standardized IQ test.  Is his true IQ really 100, or could this score be off somewhat due to an unknown level of error?  Chances are that there is error associated with his score, and therefore we must treat this score of 100 as an estimate of his true IQ.  When using an achieved score to estimate a true score, we must determine how much error is associated with it.  Methods used to estimate a true score are called estimators, and they fall into three main groups: point estimation, interval estimation, and confidence interval estimation.

Point Estimation.  In point estimation, the value of a sample statistic or achieved score is used as a best guess or quick estimate of the population statistic or true score.  In other words, if a sample of students averages 78 on a final examination, you could estimate that all students would average 78 on the same test.  The major weakness of point estimation is its lack of concern for error: the achieved score is simply assumed to be the true score.
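As a minimal sketch of this arithmetic, the Python snippet below (using made-up exam scores) treats the sample mean as a point estimate of the population mean:

    # Point estimation: the sample mean serves as the single best guess
    # for the population mean.  The scores below are hypothetical.
    sample_scores = [72, 85, 78, 81, 74, 79, 77, 80, 76, 78]

    point_estimate = sum(sample_scores) / len(sample_scores)
    print(f"Point estimate of the true average: {point_estimate}")  # 78.0

Note that nothing in this calculation acknowledges error; the estimate is reported as if it were exact.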

Interval Estimation.  Interval estimation goes a step further and assumes that some level of error has occurred in the achieved score, which is almost always the case.  If the sampled students achieve an average of 78, we could estimate the amount of error and then express the true score as an interval rather than a single point.  There are different methods to determine error, but perhaps the most commonly used is the standard error of the mean.
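The usual formula for this quantity is SEM = s / √n, where s is the sample standard deviation and n is the sample size; the larger the sample, the smaller the estimated error.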

Using a simple statistical formula, we determine the amount of error and then estimate the true score as the achieved score plus or minus the standard error of the mean.  For instance, if the students average 78 on their exam and the standard error of the mean is determined to be 3 points, the students’ true average would be estimated as 78 +/- 3, or between 75 and 81.
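Continuing the earlier sketch (same hypothetical scores, so the numbers differ from the chapter's 78 +/- 3 example), the snippet below computes the standard error of the mean and the resulting interval estimate:

    import statistics

    sample_scores = [72, 85, 78, 81, 74, 79, 77, 80, 76, 78]

    mean = statistics.mean(sample_scores)
    s = statistics.stdev(sample_scores)      # sample standard deviation
    sem = s / len(sample_scores) ** 0.5      # standard error of the mean

    # Interval estimate: achieved score plus or minus the standard error.
    print(f"mean = {mean:.1f}, SEM = {sem:.2f}")
    print(f"interval estimate: {mean - sem:.1f} to {mean + sem:.1f}")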

Confidence Interval Estimation.  Confidence interval estimation uses the same method as interval estimation but adds a stated level of confidence, or certainty, in the true score.  Through more complex statistics, a specific level of confidence in an interval can be determined.  We might say then, based on these statistics, that we are 95% confident that the true score lies somewhere between 75 and 81.  The more confident we are, the larger the interval.
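Under a normal-distribution assumption, a conventional way to attach a confidence level is to widen the interval by a multiplier drawn from the normal curve (about 1.96 standard errors on each side for 95%).  The sketch below, again with hypothetical data, shows how higher confidence forces a wider interval:

    import statistics

    sample_scores = [72, 85, 78, 81, 74, 79, 77, 80, 76, 78]
    mean = statistics.mean(sample_scores)
    sem = statistics.stdev(sample_scores) / len(sample_scores) ** 0.5

    # Standard z-multipliers for common confidence levels.
    # Higher confidence -> larger multiplier -> wider interval.
    for confidence, z in [(0.90, 1.645), (0.95, 1.960), (0.99, 2.576)]:
        low, high = mean - z * sem, mean + z * sem
        print(f"{confidence:.0%} CI: {low:.1f} to {high:.1f}")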

Imagine this exam has a maximum possible score of 100 points.  We would be 100% sure that a student will score somewhere between 0 and 100.  In fact, we are always 100% confident that a true score falls somewhere between the minimum possible score and the maximum possible score.  Narrowing the true score down, however, reduces our level of confidence.  We might be only 98% sure that the true score falls somewhere between 70 and 90, and only 95% confident that the true score falls somewhere between 75 and 81.

A good way to look at confidence interval estimation is to consider the roll of a six-sided die.  How confident would you be that rolling the die once would result in a number between one and six?  You should be 100% confident, because those are the only possible scores.  How sure would you be that the roll would net an even number or an odd number?  Since half of the numbers are even and half are odd, you would be 50% confident that one of these two possibilities would occur.  Now, what about rolling only a one?  Since there are six possible scores and you are estimating the roll to net only one of those six, you should see the chances as 1 in 6.  Therefore you would be only about 17% confident that the next roll would result in a score of one.  The more we pinpoint the score, the less confident we are in our prediction.
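As an informal check of these figures, the short simulation below (a sketch using Python's random module) rolls a virtual die many times and reports how often each prediction comes true:

    import random

    rolls = [random.randint(1, 6) for _ in range(100_000)]

    # Every roll lands between 1 and 6, so this prediction always holds.
    in_range = sum(1 <= r <= 6 for r in rolls) / len(rolls)
    # Half of the faces are even: succeeds about 50% of the time.
    is_even = sum(r % 2 == 0 for r in rolls) / len(rolls)
    # Only one face in six is a one: succeeds about 17% of the time.
    is_one = sum(r == 1 for r in rolls) / len(rolls)

    print(f"between 1 and 6: {in_range:.0%}")   # ~100%
    print(f"even number:     {is_even:.0%}")    # ~50%
    print(f"exactly one:     {is_one:.0%}")     # ~17%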