
10.8 A Complete Example

Learning Objective

  1. To see a complete linear correlation and regression analysis, in a practical setting, as a cohesive whole.

In the preceding sections numerous concepts were introduced and illustrated, but the analysis was broken into disjoint pieces by sections. In this section we will go through a complete example of the use of correlation and regression analysis of data from start to finish, touching on all the topics of this chapter in sequence.

In general educators are convinced that, all other factors being equal, class attendance has a significant bearing on course performance. To investigate the relationship between attendance and performance, an education researcher selects for study a multiple-section introductory statistics course at a large university. Instructors in the course agree to keep an accurate record of attendance throughout one semester. At the end of the semester 26 students are selected at random. For each student in the sample two measurements are taken: x, the number of days the student was absent, and y, the student’s score on the common final exam in the course. The data are summarized in Table 10.4 "Absence and Score Data".

Table 10.4 Absence and Score Data

Absences x   Score y   Absences x   Score y
2 76 4 41
7 29 5 63
2 96 4 88
7 63 0 98
2 79 1 99
7 71 0 89
0 88 1 96
0 92 3 90
6 55 1 90
6 70 3 68
2 80 1 84
2 75 3 80
1 63 1 78

A scatter plot of the data is given in Figure 10.13 "Plot of the Absence and Exam Score Pairs". There is a downward trend in the plot, which indicates that, on average, students with more absences tend to do worse on the final examination.

Figure 10.13 Plot of the Absence and Exam Score Pairs
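
A plot like Figure 10.13 can be generated directly from the data in Table 10.4. The following Python sketch is an addition to the text; it assumes the matplotlib library is available, and the variable names are purely illustrative.

    # Sketch: scatter plot of absences (x) against final exam score (y).
    import matplotlib.pyplot as plt

    x = [2, 7, 2, 7, 2, 7, 0, 0, 6, 6, 2, 2, 1,
         4, 5, 4, 0, 1, 0, 1, 3, 1, 3, 1, 3, 1]              # days absent
    y = [76, 29, 96, 63, 79, 71, 88, 92, 55, 70, 80, 75, 63,
         41, 63, 88, 98, 99, 89, 96, 90, 90, 68, 84, 80, 78]  # final exam scores

    plt.scatter(x, y)
    plt.xlabel("Number of absences x")
    plt.ylabel("Final exam score y")
    plt.title("Plot of the Absence and Exam Score Pairs")
    plt.show()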

The trend observed in Figure 10.13 "Plot of the Absence and Exam Score Pairs", together with the fairly constant width of the apparent band of points in the plot, makes it reasonable to assume a relationship between x and y of the form

y = \beta_1 x + \beta_0 + \varepsilon

where β1 and β0 are unknown parameters and ε is a normal random variable with mean zero and unknown standard deviation σ. Note carefully that this model is being proposed for the population of all students taking this course, not just those taking it this semester, and certainly not just those in the sample. The numbers β1, β0, and σ are parameters relating to this large population.

First we perform preliminary computations that will be needed later. The data are processed in Table 10.5 "Processed Absence and Score Data".

Table 10.5 Processed Absence and Score Data

x   y   x²   xy   y²   x   y   x²   xy   y²
2 76 4 152 5776 4 41 16 164 1681
7 29 49 203 841 5 63 25 315 3969
2 96 4 192 9216 4 88 16 352 7744
7 63 49 441 3969 0 98 0 0 9604
2 79 4 158 6241 1 99 1 99 9801
7 71 49 497 5041 0 89 0 0 7921
0 88 0 0 7744 1 96 1 96 9216
0 92 0 0 8464 3 90 9 270 8100
6 55 36 330 3025 1 90 1 90 8100
6 70 36 420 4900 3 68 9 204 4624
2 80 4 160 6400 1 84 1 84 7056
2 75 4 150 5625 3 80 9 240 6400
1 63 1 63 3969 1 78 1 78 6084

Adding up the numbers in each column in Table 10.5 "Processed Absence and Score Data" gives

\sum x = 71,\quad \sum y = 2001,\quad \sum x^2 = 329,\quad \sum xy = 4758,\quad \text{and}\quad \sum y^2 = 161511.

Then

SS_{xx} = \sum x^2 - \frac{1}{n}\left(\sum x\right)^2 = 329 - \frac{1}{26}(71)^2 = 135.1153846
SS_{xy} = \sum xy - \frac{1}{n}\left(\sum x\right)\left(\sum y\right) = 4758 - \frac{1}{26}(71)(2001) = -706.2692308
SS_{yy} = \sum y^2 - \frac{1}{n}\left(\sum y\right)^2 = 161511 - \frac{1}{26}(2001)^2 = 7510.961538

and

\bar{x} = \frac{\sum x}{n} = \frac{71}{26} = 2.730769231 \quad\text{and}\quad \bar{y} = \frac{\sum y}{n} = \frac{2001}{26} = 76.96153846
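
These preliminary quantities can be checked numerically. The short Python sketch below is an addition to the text; it uses only the standard library and reproduces the column sums, the three SS quantities, and the two means from the raw data.

    # Sketch: preliminary computations from the raw data (pure Python).
    x = [2, 7, 2, 7, 2, 7, 0, 0, 6, 6, 2, 2, 1,
         4, 5, 4, 0, 1, 0, 1, 3, 1, 3, 1, 3, 1]
    y = [76, 29, 96, 63, 79, 71, 88, 92, 55, 70, 80, 75, 63,
         41, 63, 88, 98, 99, 89, 96, 90, 90, 68, 84, 80, 78]
    n = len(x)                                        # 26 students

    sum_x  = sum(x)                                   # 71
    sum_y  = sum(y)                                   # 2001
    sum_x2 = sum(xi * xi for xi in x)                 # 329
    sum_xy = sum(xi * yi for xi, yi in zip(x, y))     # 4758
    sum_y2 = sum(yi * yi for yi in y)                 # 161511

    ss_xx = sum_x2 - sum_x ** 2 / n                   # about 135.1154
    ss_xy = sum_xy - sum_x * sum_y / n                # about -706.2692
    ss_yy = sum_y2 - sum_y ** 2 / n                   # about 7510.9615

    x_bar = sum_x / n                                 # about 2.7308
    y_bar = sum_y / n                                 # about 76.9615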

We begin the actual modelling by finding the least squares regression line, the line that best fits the data. Its slope and y-intercept are

\hat{\beta}_1 = \frac{SS_{xy}}{SS_{xx}} = \frac{-706.2692308}{135.1153846} = -5.227156278
\hat{\beta}_0 = \bar{y} - \hat{\beta}_1\bar{x} = 76.96153846 - (-5.227156278)(2.730769231) = 91.23569553

Rounding these numbers to two decimal places, the least squares regression line for these data is

\hat{y} = -5.23x + 91.24.
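
As a quick check, the slope and intercept can be recomputed directly from the SS quantities and the two means found above. The following Python sketch is an addition to the text and simply plugs in the values already obtained.

    # Sketch: least squares slope and intercept from the summary quantities.
    ss_xx = 135.1153846
    ss_xy = -706.2692308
    x_bar = 2.730769231
    y_bar = 76.96153846

    beta1_hat = ss_xy / ss_xx                # about -5.2272
    beta0_hat = y_bar - beta1_hat * x_bar    # about 91.2357
    # Rounded to two decimals the fitted line is y_hat = -5.23 x + 91.24.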

The goodness of fit of this line to the scatter plot, the sum of its squared errors, is

SSE = SS_{yy} - \hat{\beta}_1 SS_{xy} = 7510.961538 - (-5.227156278)(-706.2692308) = 3819.181894

This number is not particularly informative in itself, but we use it to compute the important statistic

s_\varepsilon = \sqrt{\frac{SSE}{n-2}} = \sqrt{\frac{3819.181894}{24}} = 12.6148

The statistic sε estimates the standard deviation σ of the normal random variable ε in the model. Its meaning is that among all students with the same number of absences, the standard deviation of their scores on the final exam is about 12.6 points. Such a large value on a 100-point exam means that the final exam scores of each sub-population of students, based on the number of absences, are highly variable.
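
In a script, SSE and sε follow immediately from the quantities already computed. A minimal Python sketch (an addition to the text, reusing the numbers above):

    # Sketch: sum of squared errors and the estimate of sigma.
    import math

    ss_yy     = 7510.961538
    ss_xy     = -706.2692308
    beta1_hat = -5.227156278
    n         = 26

    sse       = ss_yy - beta1_hat * ss_xy    # about 3819.18
    s_epsilon = math.sqrt(sse / (n - 2))     # about 12.61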

The size and sign of the slope β^1 = −5.23 indicate that, for every class missed, students tend to score about 5.23 fewer points on the final exam, on average. Similarly, for every two classes missed students tend to score on average 2 × 5.23 = 10.46 fewer points on the final exam, or about a letter grade worse on average.

Since 0 is in the range of x-values in the data set, the y-intercept also has meaning in this problem. It is an estimate of the average grade on the final exam of all students who have perfect attendance. The predicted average score of such students is β^0 = 91.24.

Before we use the regression equation further, or perform other analyses, it would be a good idea to examine the utility of the linear regression model. We can do this in two ways: 1) by computing the correlation coefficient r to see how strongly the number of absences x and the score y on the final exam are correlated, and 2) by testing the null hypothesis H0:β1=0 (the slope of the population regression line is zero, so x is not a good predictor of y) against the natural alternative Ha:β1<0 (the slope of the population regression line is negative, so final exam scores y go down as absences x go up).

The correlation coefficient r is

r = \frac{SS_{xy}}{\sqrt{SS_{xx} SS_{yy}}} = \frac{-706.2692308}{\sqrt{(135.1153846)(7510.961538)}} = -0.7010840977

a moderate negative correlation.
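
Numerically, r comes from the same three SS quantities; for example (a Python sketch added here, not part of the original text):

    # Sketch: linear correlation coefficient from the SS quantities.
    import math

    ss_xx = 135.1153846
    ss_xy = -706.2692308
    ss_yy = 7510.961538

    r = ss_xy / math.sqrt(ss_xx * ss_yy)     # about -0.7011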

Turning to the test of hypotheses, let us test at the commonly used 5% level of significance. The test is

H_0: \beta_1 = 0 \quad\text{vs.}\quad H_a: \beta_1 < 0 \quad @\ \alpha = 0.05

From Figure 12.3 “Critical Values of t”, with df = 26 − 2 = 24 degrees of freedom, t0.05 = 1.711, so the rejection region is (−∞, −1.711]. The value of the standardized test statistic is

t = \frac{\hat{\beta}_1 - B_0}{s_\varepsilon / \sqrt{SS_{xx}}} = \frac{-5.227156278 - 0}{12.6148 / \sqrt{135.1153846}} = -4.817

which falls in the rejection region. We reject H0 in favor of Ha. The data provide sufficient evidence, at the 5% level of significance, to conclude that β1 is negative, meaning that as the number of absences increases, the average score on the final exam decreases.
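
The standardized test statistic and the critical value can also be obtained in software rather than from a table. The sketch below is an addition to the text and assumes the scipy library is available; scipy.stats.t.ppf returns the left-tail critical value directly.

    # Sketch: left-tailed test of H0: beta1 = 0 versus Ha: beta1 < 0 at alpha = 0.05.
    import math
    from scipy.stats import t as t_dist

    beta1_hat = -5.227156278
    s_epsilon = 12.6148
    ss_xx     = 135.1153846
    n         = 26
    alpha     = 0.05

    t_stat = (beta1_hat - 0) / (s_epsilon / math.sqrt(ss_xx))   # about -4.82
    t_crit = t_dist.ppf(alpha, n - 2)                           # about -1.711

    reject_h0 = t_stat <= t_crit    # True, so H0 is rejected in favor of Ha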

As already noted, the value β^1 = −5.23 gives a point estimate of how much one additional absence is reflected in the average score on the final exam. For each additional absence the average drops by about 5.23 points. We can widen this point estimate to a confidence interval for β1. At the 95% confidence level, from Figure 12.3 “Critical Values of t” with df = 26 − 2 = 24 degrees of freedom, tα∕2 = t0.025 = 2.064. The 95% confidence interval for β1 based on our sample data is

\hat{\beta}_1 \pm t_{\alpha/2} \frac{s_\varepsilon}{\sqrt{SS_{xx}}} = -5.23 \pm 2.064 \frac{12.6148}{\sqrt{135.1153846}} = -5.23 \pm 2.24

or (−7.47, −2.99). We are 95% confident that, among all students who ever take this course, for each additional class missed the average score on the final exam goes down by between 2.99 and 7.47 points.
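
The same interval can be computed in a few lines of Python. The sketch below is an addition to the text and assumes scipy is available for the critical value.

    # Sketch: 95% confidence interval for the slope beta1.
    import math
    from scipy.stats import t as t_dist

    beta1_hat = -5.23
    s_epsilon = 12.6148
    ss_xx     = 135.1153846
    n         = 26

    t_crit = t_dist.ppf(1 - 0.05 / 2, n - 2)             # about 2.064
    margin = t_crit * s_epsilon / math.sqrt(ss_xx)       # about 2.24

    interval = (beta1_hat - margin, beta1_hat + margin)  # about (-7.47, -2.99)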

If we restrict attention to the sub-population of all students who have exactly five absences, say, then using the least squares regression equation y^ = −5.23x + 91.24 we estimate that the average score on the final exam for those students is

\hat{y} = -5.23(5) + 91.24 = 65.09

This is also our best guess as to the score on the final exam of any particular student who is absent five times. A 95% confidence interval for the average score on the final exam for all students with five absences is

\hat{y}_p \pm t_{\alpha/2}\, s_\varepsilon \sqrt{\frac{1}{n} + \frac{(x_p - \bar{x})^2}{SS_{xx}}} = 65.09 \pm (2.064)(12.6148)\sqrt{\frac{1}{26} + \frac{(5 - 2.730769231)^2}{135.1153846}} = 65.09 \pm 26.0369\sqrt{0.0765727299} = 65.09 \pm 7.20

which is the interval (57.89, 72.29). This confidence interval suggests that the true mean score on the final exam for all students who are absent from class exactly five times during the semester is likely to be between 57.89 and 72.29.
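
A Python sketch of this computation (an addition to the text, again assuming scipy for the critical value):

    # Sketch: 95% confidence interval for the mean final exam score of all
    # students with exactly five absences (x_p = 5).
    import math
    from scipy.stats import t as t_dist

    beta1_hat, beta0_hat = -5.23, 91.24
    s_epsilon = 12.6148
    ss_xx     = 135.1153846
    x_bar     = 2.730769231
    n, x_p    = 26, 5

    y_hat  = beta1_hat * x_p + beta0_hat     # 65.09
    t_crit = t_dist.ppf(0.975, n - 2)        # about 2.064
    margin = t_crit * s_epsilon * math.sqrt(1 / n + (x_p - x_bar) ** 2 / ss_xx)  # about 7.20

    interval = (y_hat - margin, y_hat + margin)   # about (57.9, 72.3)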

If a particular student misses exactly five classes during the semester, his score on the final exam is predicted with 95% confidence to be in the interval

\hat{y}_p \pm t_{\alpha/2}\, s_\varepsilon \sqrt{1 + \frac{1}{n} + \frac{(x_p - \bar{x})^2}{SS_{xx}}} = 65.09 \pm 26.0369\sqrt{1.0765727299} = 65.09 \pm 27.02

which is the interval (38.07, 92.11). This prediction interval suggests that this individual student’s final exam score is likely to be between 38.07 and 92.11. Whereas the 95% confidence interval for the average score of all students with five absences gave real information, this interval is so wide that it says practically nothing about what the individual student’s final exam score might be. This is an example of the dramatic effect that the presence of the extra summand 1 under the square root sign in the prediction interval can have.
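
The corresponding sketch for the prediction interval differs only in the extra 1 under the square root (again an addition to the text, assuming scipy):

    # Sketch: 95% prediction interval for a single student with five absences.
    import math
    from scipy.stats import t as t_dist

    y_hat     = 65.09
    s_epsilon = 12.6148
    ss_xx     = 135.1153846
    x_bar     = 2.730769231
    n, x_p    = 26, 5

    t_crit = t_dist.ppf(0.975, n - 2)
    margin = t_crit * s_epsilon * math.sqrt(1 + 1 / n + (x_p - x_bar) ** 2 / ss_xx)  # about 27.0

    interval = (y_hat - margin, y_hat + margin)   # about (38.1, 92.1)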

Finally, the proportion of the variability in the scores of students on the final exam that is explained by the linear relationship between that score and the number of absences is estimated by the coefficient of determination, r2. Since we have already computed r above we easily find that

r^2 = (-0.7010840977)^2 = 0.491518912

or about 49%. Thus although there is a significant correlation between attendance and performance on the final exam, and we can estimate with fair accuracy the average score of students who miss a certain number of classes, nevertheless less than half the total variation of the exam scores in the sample is explained by the number of absences. This should not come as a surprise, since there are many factors besides attendance that bear on student performance on exams.
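
Finally, most of the headline quantities in this example can be reproduced with a single library call as a cross-check. The sketch below is an addition to the text and assumes scipy is installed; scipy.stats.linregress reports the slope, intercept, and r, and its p-value refers to the two-sided test of H0: β1 = 0, not the one-tailed test used above.

    # Sketch: cross-check of the analysis with scipy.stats.linregress.
    from scipy.stats import linregress

    x = [2, 7, 2, 7, 2, 7, 0, 0, 6, 6, 2, 2, 1,
         4, 5, 4, 0, 1, 0, 1, 3, 1, 3, 1, 3, 1]
    y = [76, 29, 96, 63, 79, 71, 88, 92, 55, 70, 80, 75, 63,
         41, 63, 88, 98, 99, 89, 96, 90, 90, 68, 84, 80, 78]

    result = linregress(x, y)
    print(result.slope)        # about -5.23 (beta1 hat)
    print(result.intercept)    # about 91.24 (beta0 hat)
    print(result.rvalue)       # about -0.70 (r)
    print(result.rvalue ** 2)  # about 0.49 (coefficient of determination)
    # result.pvalue is the two-sided p-value for H0: beta1 = 0.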

Key Takeaway

  • It is a good idea to attend class.

Exercises

    The exercises in this section are unrelated to those in previous sections.

  1. The data give the amount x of silicofluoride in the water (mg/L) and the amount y of lead in the bloodstream (μg/dL) of ten children in various communities with and without municipal water. Perform a complete analysis of the data, in analogy with the discussion in this section (that is, make a scatter plot, do preliminary computations, find the least squares regression line, find SSE, sε, and r, and so on). In the hypothesis test use as the alternative hypothesis β1>0, and test at the 5% level of significance. Use confidence level 95% for the confidence interval for β1. Construct 95% confidence and prediction intervals at xp=2 at the end.

    x  0.0  0.0  1.1  1.4  1.6  1.7  2.0  2.0  2.2  2.2
    y  0.3  0.1  4.7  3.2  5.1  7.0  5.0  6.1  8.6  9.5
  2. The table gives the weight x (thousands of pounds) and available heat energy y (million BTU) of a standard cord of various species of wood typically used for heating. Perform a complete analysis of the data, in analogy with the discussion in this section (that is, make a scatter plot, do preliminary computations, find the least squares regression line, find SSE, sε, and r, and so on). In the hypothesis test use as the alternative hypothesis β1>0, and test at the 5% level of significance. Use confidence level 95% for the confidence interval for β1. Construct 95% confidence and prediction intervals at xp=5 at the end.

    x  3.37  3.50  4.29  4.00  4.64  4.99  4.94  5.48  3.26  4.16
    y  23.6  17.5  20.1  21.6  28.1  25.3  27.0  30.7  18.9  20.7

    Large Data Set Exercises

  1. Large Data Sets 3 and 3A list the shoe sizes and heights of 174 customers entering a shoe store. The gender of the customer is not indicated in Large Data Set 3. However, men’s and women’s shoes are not measured on the same scale; for example, a size 8 shoe for men is not the same size as a size 8 shoe for women. Thus it would not be meaningful to apply regression analysis to Large Data Set 3. Nevertheless, compute the scatter diagrams, with shoe size as the independent variable (x) and height as the dependent variable (y), for (i) just the data on men, (ii) just the data on women, and (iii) the full mixed data set with both men and women. Does the third, invalid scatter diagram look markedly different from the other two?

    http://www.flatworldknowledge.com/sites/all/files/data3.xls

    http://www.flatworldknowledge.com/sites/all/files/data3A.xls

  2. Separate out from Large Data Set 3A just the data on men and do a complete analysis, with shoe size as the independent variable (x) and height as the dependent variable (y). Use α=0.05 and xp=10 whenever appropriate.

    http://www.flatworldknowledge.com/sites/all/files/data3A.xls

  3. Separate out from Large Data Set 3A just the data on women and do a complete analysis, with shoe size as the independent variable (x) and height as the dependent variable (y). Use α=0.05 and xp=10 whenever appropriate.

    http://www.flatworldknowledge.com/sites/all/files/data3A.xls

Answers

  1. Σx=14.2, Σy=49.6, Σxy=91.73, Σx2=26.3, Σy2=333.86.

    SSxx=6.136, SSxy=21.298, SSyy=87.844.

    x-=1.42, y-=4.96.

    β^1=3.47, β^0=0.03.

    SSE=13.92.

    sε=1.32.

    r = 0.9174, r2 = 0.8416.

    df=8, T = 6.518.

    The 95% confidence interval for β1 is: (2.24,4.70).

    At xp=2, the 95% confidence interval for E(y) is (5.77,8.17).

    At xp=2, the 95% prediction interval for y is (3.73,10.21).

  1. The positively correlated trend seems less pronounced than that in each of the previous plots.

  2. The regression line: y^=3.3426x+138.7692. Coefficient of Correlation: r = 0.9431. Coefficient of Determination: r2 = 0.8894. SSE=283.2473. sε=1.9305. A 95% confidence interval for β1: (3.0733,3.6120). Test Statistic for H0:β1=0: T = 24.7209. At xp=10, y^=172.1956; a 95% confidence interval for the mean value of y is: (171.5577,172.8335); and a 95% prediction interval for an individual value of y is: (168.2974,176.0938).