Design and Analysis in College Impact Research: Which Counts More?

Over the last several decades, student affairs and assessment scholars who study college impact have used a number of different research designs and statistical procedures in an attempt to control for the characteristics and propensities that lead students to self-select into a particular intervention or experience. Such control is particularly important because these characteristics and propensities may seriously confound any estimate of the effect of the intervention or experience itself. By far the most common method used in the college impact literature to date has been covariate adjustment, based on various multiple regression approaches (Pascarella & Terenzini, 2005). This approach relies on statistical control to remove, or partial out, the confounding effects of student self-selection. Recently, however, covariate adjustment has drawn considerable criticism on the grounds that its estimate of the effect of an intervention or experience can be biased. Rather than relying on regression-based covariate adjustment techniques, a number of scholars have suggested propensity score matching as a more effective analytical approach for controlling the effects of demographic, attitudinal, or other factors that might increase or decrease students' likelihood of self-selecting into a given treatment of interest, thereby isolating the effect of the treatment itself (e.g., Reynolds & DesJardins, 2009; Schneider, Carnoy, Kilpatrick, Schmidt, & Shavelson, 2007).
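The two analytical approaches just described can be contrasted on simulated data. The sketch below is purely illustrative and rests entirely on assumptions of our own, not on the study's data or code: a single hypothetical confounder (ability) drives both self-selection into a treatment and the outcome, covariate adjustment fits the outcome on treatment plus the confounder, and propensity score matching first models selection and then compares each treated case with the nearest-scoring control. All helper names (covariate_adjustment, propensity_matching, and so on) are invented for the example.

```python
import math
import random

random.seed(0)

# Hypothetical data-generating process: one precollege "ability" score drives
# both self-selection into the treatment (say, liberal arts attendance) and
# the cognitive outcome, confounding the naive group comparison.
TRUE_EFFECT = 2.0
students = []
for _ in range(2000):
    ability = random.gauss(0, 1)
    p_treat = 1.0 / (1.0 + math.exp(-1.5 * ability))   # selection on ability
    treated = 1 if random.random() < p_treat else 0
    outcome = 3.0 * ability + TRUE_EFFECT * treated + random.gauss(0, 1)
    students.append((ability, treated, outcome))

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def covariate_adjustment(data):
    """OLS of outcome on [1, treatment, ability] via the normal equations;
    returns the treatment coefficient (the regression-adjusted estimate)."""
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for ability, treated, outcome in data:
        row = (1.0, float(treated), ability)
        for i in range(3):
            b[i] += row[i] * outcome
            for j in range(3):
                A[i][j] += row[i] * row[j]
    return solve3(A, b)[1]

def fit_propensity(data, lr=1.0, iters=500):
    """Logistic regression of treatment on ability, fit by gradient ascent."""
    w0 = w1 = 0.0
    n = len(data)
    for _ in range(iters):
        g0 = g1 = 0.0
        for ability, treated, _ in data:
            p = 1.0 / (1.0 + math.exp(-(w0 + w1 * ability)))
            g0 += treated - p
            g1 += (treated - p) * ability
        w0 += lr * g0 / n
        w1 += lr * g1 / n
    return w0, w1

def propensity_matching(data):
    """Nearest-neighbor matching (with replacement) on the estimated
    propensity score; returns the mean treated-minus-matched difference."""
    w0, w1 = fit_propensity(data)
    score = lambda a: 1.0 / (1.0 + math.exp(-(w0 + w1 * a)))
    treat = [(score(a), y) for a, t, y in data if t == 1]
    ctrl = [(score(a), y) for a, t, y in data if t == 0]
    diffs = [y - min(ctrl, key=lambda c: abs(c[0] - p))[1] for p, y in treat]
    return sum(diffs) / len(diffs)

# Raw group-mean difference, which absorbs the self-selection on ability.
naive = (sum(y for _, t, y in students if t) / sum(t for _, t, _ in students)
         - sum(y for _, t, y in students if not t)
         / sum(1 - t for _, t, _ in students))
print(naive, covariate_adjustment(students), propensity_matching(students))
```

In this toy setting both adjusted estimates land near the simulated effect while the raw difference is badly inflated; with real data, of course, the confounders are many and only partially observed, which is precisely why the two approaches can disagree.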

In this study we employed both covariate adjustment and propensity score matching to estimate the causal influence of an example intervention: the first year of attendance at a liberal arts college (as opposed to another type of 4-year institution). Specifically, we estimated the effect of liberal arts college attendance on three cognitive outcomes. We examined the estimates yielded by these two analytical approaches under different research design assumptions, with and without a precollege measure of each outcome. Our purposes were to determine the comparability of causal estimates obtained from covariate adjustment and propensity score matching, and to examine how those estimates might change when different research designs are employed to study college impact. The focus of the study was not on understanding the effects of liberal arts colleges per se; rather, estimating the effects of liberal arts colleges versus other 4-year institutions served only as an example. The approaches we explored are relevant to estimating the effects of a broad range of between-college and within-college interventions or experiences.
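The design comparison above, estimating an effect with and without a precollege measure of the outcome, can be sketched in miniature. Everything in this sketch is a hypothetical illustration (the pretest-posttest data-generating process and the treatment_effect helper are our assumptions, not the study's data or analysis): when students self-select on precollege standing and the pretest is omitted, the estimate absorbs that selection; partialling the pretest out recovers the simulated effect.

```python
import math
import random

random.seed(1)

# Hypothetical pretest-posttest data: the precollege (pretest) measure drives
# both selection into the treatment and the end-of-year (posttest) outcome.
TRUE_EFFECT = 1.0
rows = []
for _ in range(4000):
    pretest = random.gauss(0, 1)
    treated = 1 if random.random() < 1.0 / (1.0 + math.exp(-pretest)) else 0
    posttest = pretest + TRUE_EFFECT * treated + random.gauss(0, 1)
    rows.append((pretest, treated, posttest))

def mean(xs):
    return sum(xs) / len(xs)

def slope(x, y):
    """OLS slope of y on x (with intercept)."""
    mx, my = mean(x), mean(y)
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

def treatment_effect(rows, use_pretest):
    """Treatment estimate with or without the pretest as a covariate.
    The adjusted version uses Frisch-Waugh-Lovell partialling: residualize
    both treatment and posttest on the pretest, then regress residuals."""
    pre = [r[0] for r in rows]
    t = [float(r[1]) for r in rows]
    y = [r[2] for r in rows]
    if not use_pretest:
        return slope(t, y)          # just the raw group-mean difference
    bt, by = slope(pre, t), slope(pre, y)
    mp, mt, my = mean(pre), mean(t), mean(y)
    rt = [ti - mt - bt * (p - mp) for ti, p in zip(t, pre)]
    ry = [yi - my - by * (p - mp) for yi, p in zip(y, pre)]
    return slope(rt, ry)

print(treatment_effect(rows, use_pretest=False))  # inflated by self-selection
print(treatment_effect(rows, use_pretest=True))   # close to TRUE_EFFECT
```

The gap between the two printed estimates is the point: under selection on precollege standing, the design choice (collecting a pretest or not) matters as much as the analytic machinery applied afterward.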


Sample and Data Collection

We analyzed data from the first year of the Wabash National Study of Liberal Arts Education (WNS), a longitudinal pretest-posttest investigation of the effects of liberal arts experiences on a range of cognitive and noncognitive college outcomes thought to be associated with undergraduate liberal arts education. The colleges and universities participating in the WNS represent a diverse selection of institutions, varying in characteristics such as type and control, selectivity, enrollment, and location within the United States. For our analysis sample, we chose the 2006 WNS iteration, which collected extensive precollege data on students in early Fall 2006 and again in Spring 2007. Our analyses were based on first-year, full-time undergraduates attending 17 different 4-year institutions (11 liberal arts colleges, 3 research universities, and 3 regional institutions). We estimated the effects of the first year of liberal arts college attendance on three cognitive/learning-orientation outcome measures: critical thinking skills, need for cognition (a measure of continuing motivation for learning), and positive attitude toward literacy activities. Because of matrix sampling in part of the WNS design, complete precollege and end-of-first-year data were available for 1,377 students on one dependent measure (critical thinking skills) and for 2,872 students on the other two dependent measures (need for cognition and positive attitude toward literacy). Although there are clear limitations to the external validity, or generalizability, of results obtained with the 17-institution WNS sample, our concern was with the internal validity of our estimates of the effects of liberal arts colleges. Moreover, as our results are intended for didactic rather than inferential purposes, concerns with generalizing the results of the analyses...