
# To Change or Not to Change—A Rose by Any Other Name: A Response to Pascarella and Wolniak

One danger of point-counterpoint articles is that readers may lose sight of the original issues. I want to take a moment to identify the points of agreement in recently published papers about the analysis of change and then turn to the points of disagreement. First and foremost, Pascarella and Wolniak (in this issue) and I agree that it is important to study how students change as a result of college. Longitudinal designs are not only the most internally valid approach; they are essential. (Longitudinal designs do not have to be pretest-posttest designs, although I will limit myself to those designs in this response.) It also follows that higher-education researchers should be interested in making causal statements about how college affects students, rather than simply describing what happens to students during college.

I agree with Pascarella and his colleagues (2003) that the negative correlation between gains and initial status, which is due both to regression to the mean and measurement error, can confound college-effects research. I also agree that an analysis of covariance, as they labeled their approach in their initial article, will explain more of the total variance in an outcome measure than an analysis of gain scores without the pretest, if the pretest is correlated with the posttest.
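The confound noted above can be made concrete with a small simulation of my own (not drawn from the articles under discussion; the score scales and error variances are arbitrary illustrative choices). Even when no student truly changes, measurement error alone produces a negative correlation between observed gains and observed initial status:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# True ability is stable: every student's true score is unchanged,
# so any observed "gain" is pure measurement error.
true_score = rng.normal(50, 10, n)
pretest = true_score + rng.normal(0, 5, n)   # observed pretest, with error
posttest = true_score + rng.normal(0, 5, n)  # observed posttest, with error

gain = posttest - pretest

# Although no one truly changed, observed gains correlate negatively
# with observed initial status: high pretest scorers appear to "lose"
# ground because they regress toward the mean.
r = np.corrcoef(pretest, gain)[0, 1]
print(round(r, 2))  # negative, roughly -0.3 under these settings
```

Analytically, with error variance 25 and true-score variance 100, the expected correlation here is about -0.32, which is what the simulation recovers.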

My disagreement with Pascarella and Wolniak's response in this issue concerns the significance of their original findings. One of the points I attempted to make in my reanalysis was that Ernie and his colleagues had proved a statistical truism: rescaling a variable does not change its correlation with other variables. As I indicated in my article, the pretest-posttest difference in Equation 2 is nothing more than a rescaled version of the posttest in Equation 1. Consequently, the fact that the regression coefficients for "x," "y," and "z" are the same simply demonstrates a basic principle of statistics. It does not provide new models for the study of how students change as a result of college.
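The truism can be verified directly. In this sketch (my own construction; the single covariate `x` and all coefficient values are hypothetical stand-ins for the "x," "y," and "z" predictors), regressing the gain score on the pretest and a covariate reproduces the covariate's coefficient from the posttest regression exactly, with only the pretest coefficient shifted by one:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

pretest = rng.normal(50, 10, n)
x = rng.normal(0, 1, n)  # illustrative covariate (e.g., a college experience)
posttest = 5 + 0.8 * pretest + 2.0 * x + rng.normal(0, 5, n)
gain = posttest - pretest  # the "rescaled" outcome: posttest minus pretest

# Same design matrix (intercept, pretest, covariate) for both regressions.
X = np.column_stack([np.ones(n), pretest, x])
b_post, *_ = np.linalg.lstsq(X, posttest, rcond=None)
b_gain, *_ = np.linalg.lstsq(X, gain, rcond=None)

# The covariate's coefficient is identical in both models...
print(np.allclose(b_post[2], b_gain[2]))        # True
# ...and the pretest coefficient shifts by exactly 1, nothing more.
print(np.allclose(b_post[1] - 1.0, b_gain[1]))  # True
```

This holds by algebra, not by chance: subtracting the pretest from the outcome, when the pretest is already a predictor, simply moves a known quantity across the equation.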

I have to agree with Pascarella and Wolniak (this issue) on one other point. My article had relatively little to do with the claims made in their original article. One of the points I tried to make is that differences in the results produced by traditional gain/difference scores and by Pascarella's model have little to do with internal validity and a great deal to do with the research questions being asked and answered. Subtracting a measure of ability obtained prior to college from the same measure at graduation provides an unequivocal indication of how much a student changed. Granted, this is a problematic measure when it comes time to identify how college influenced change, but it is a measure of change. As Frederic Lord (1967) observed, the analysis of covariance models described by Pascarella, Wolniak, and Pierson (2003) do not answer the question: How have students changed? Instead, these models answer the question: Would students exposed to different college experiences (e.g., different numbers of hours worked) have different outcomes if they had the same characteristics at entry? It is this subtle distinction that led Len Baird (1988) and others to conclude that approaches such as the one proposed by Pascarella and his colleagues (2003) do not provide a measure of change per se.
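The two questions can give visibly different answers, as Lord's famous example shows. In this simulation of mine (an illustrative sketch in the spirit of Lord, 1967, not a reanalysis of any published data), two groups enter with different ability and no student truly changes; the gain-score analysis correctly reports no change, while the covariance-adjusted comparison reports a group "effect" anyway, because it answers the equal-at-entry counterfactual question instead:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5_000  # students per group

# Two groups with different entering ability; no student truly changes.
true_a = rng.normal(50, 10, n)
true_b = rng.normal(60, 10, n)
true_score = np.concatenate([true_a, true_b])
group = np.concatenate([np.zeros(n), np.ones(n)])  # 0 = group A, 1 = group B

pretest = true_score + rng.normal(0, 5, 2 * n)
posttest = true_score + rng.normal(0, 5, 2 * n)

# Question 1 (gain scores): how much did students change?
gain = posttest - pretest
gain_diff = gain[group == 1].mean() - gain[group == 0].mean()

# Question 2 (covariance adjustment): would students in the two groups
# have different outcomes if they had the same characteristics at entry?
X = np.column_stack([np.ones(2 * n), pretest, group])
b, *_ = np.linalg.lstsq(X, posttest, rcond=None)

print(round(gain_diff, 1))  # near 0: neither group changed
print(round(b[2], 1))       # near 2: an adjusted group "effect" appears anyway
```

Neither analysis is wrong; they answer different questions, which is exactly the distinction at issue.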

Holland and Rubin (1983) make another important point in their tribute to Frederic Lord. They noted that the correlational and quasi-experimental designs that characterize most higher-education research can be used to make qualified causal claims about how college affects students only if researchers are willing to make untestable assumptions about pretest and posttest scores for students and groups. Because the fundamental assumptions underlying these studies of college effects cannot be tested, claims that one approach is more internally valid than another, based on estimates of explained variance, are inappropriate.

# References

Baird, L. L. (1988). Value added: Using student gains as yardsticks of learning. In C. Adelman (Ed.), Performance and judgment: Essays on principles and practice in the assessment of college student learning (pp. 205-216). Washington, DC: U.S. Government Printing Office.


## Additional Information

ISSN: 1543-3382 (online); 0897-5264 (print). Pages: 355-356. Published on Project MUSE: 2004-07-26.
