How Important are High Response Rates for College Surveys?
Surveys play an important role in understanding the higher education landscape. About 60 percent of the published research in major higher education journals has utilized survey data (Pike, 2007). Institutions also commonly use surveys to assess student outcomes and to evaluate programs, instructors, and even cafeteria food. However, declining survey participation rates threaten this vital source of information and its perceived utility. Survey researchers across a number of social science disciplines in America and abroad have witnessed a gradual decrease in survey participation over time (Brick & Williams, 2013; National Research Council, 2013). Higher education researchers have not been immune from this trend; Dey (1997) long ago highlighted the steep decline in response rates to the American Council on Education and Cooperative Institutional Research Program (CIRP) senior follow-up surveys, from 60 percent in the 1960s to 21 percent in 1991.
Survey researchers have long assumed that the best way to obtain unbiased estimates is to achieve a high response rate. For this reason, the literature on survey methods is rife with best practices and suggestions for improving survey response rates (e.g., American Association for Public Opinion Research, n.d.; Dillman, 2000; Heberlein & Baumgartner, 1978). These methods can be costly or require significant time and effort from survey researchers, and they may be infeasible for postsecondary institutions given the increasing fiscal pressures placed upon them. However, many survey researchers have begun to question the widely held assumption that low response rates produce biased results (Curtin, Presser, & Singer, 2000; Groves, 2006; Keeter, Miller, Kohut, Groves, & Presser, 2000; Massey & Tourangeau, 2013; Peytchev, 2013).
This study tests that assumption with college student assessment data. It draws on hundreds of samples of first-year and senior students with relatively high response rates, collected with a common assessment instrument under a standardized administration protocol, and examines how population estimates would have changed had researchers put forth less effort during data collection and achieved lower response rates and respondent counts. Given the prevalence of survey data in higher education research and assessment efforts, it is imperative to better understand the relationship between response rates and data quality.
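The logic of this design can be loosely illustrated with a short simulation. The sketch below uses entirely invented scores on a hypothetical 1-5 assessment item: it computes a population estimate from a full set of respondents, then randomly retains only half of them, as if the response rate had been cut in half, and recomputes the estimate. This is only a minimal sketch of the idea, not the study's actual procedure, and random subsampling mimics a best-case scenario in which the lost respondents are missing completely at random.

```python
import random

random.seed(1)

# Hypothetical scores on a 1-5 assessment item for the respondents in one
# high-response-rate administration (all values invented for illustration).
respondents = [min(5.0, max(1.0, random.gauss(3.5, 0.8))) for _ in range(500)]
full_estimate = sum(respondents) / len(respondents)

# Simulate the lower-effort scenario: randomly retain half the respondents
# and recompute the population estimate.
subsample = random.sample(respondents, k=len(respondents) // 2)
reduced_estimate = sum(subsample) / len(subsample)

print(f"full: {full_estimate:.2f}  reduced: {reduced_estimate:.2f}")
```

In real administrations, late or reluctant respondents may differ systematically from early ones, so a lower-effort collection would not simply be a random subsample; that possibility is precisely what the study examines.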
Survey nonresponse bias—the extent to which survey nonresponse leads to inaccurate population estimates—has received extensive attention in the survey research literature (e.g., Curtin et al., 2000; Groves, 2006; Groves & Peytcheva, 2008; Rubin, 1976). Although definitions vary, most view nonresponse bias as a function of the response rate and the nonresponse effect, or how much respondents and nonrespondents differ on the survey variables of interest (Keeter et al., 2000). In other words, low response rates may or may not lead to nonresponse bias, because answers to survey items may not differ substantially between respondents and nonrespondents. The impact of nonresponse on an estimate depends upon the relationship between the outcome of interest and the decision to participate in the survey (Groves, 2006). Consequently, if survey participation is not correlated with the survey's content, the answers of respondents and nonrespondents will not differ substantially. For these reasons, Massey and Tourangeau (2013) state that a high rate of nonresponse increases the potential for biased estimates but does not necessarily bias an estimate. Peytchev (2013) goes further, arguing that the use of the response rate as the singular measure of survey representativeness is flawed, as "it is nonresponse bias that is feared, not nonresponse itself" (p. 89). The ambiguity surrounding nonresponse bias was made explicit when Keeter (2012) reported to the National Science Foundation that "…there is no comprehensive theory of survey response that can generate reliable predictions about when nonresponse bias will occur" (p. 43). It is therefore incumbent upon researchers to conduct nonresponse bias assessments, since no one knows for certain whether bias will exist for any given instrument with any population of interest.
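The decomposition described above, in which bias is the product of the nonresponse rate and the respondent/nonrespondent gap, can be written as bias = (1 − RR) × (ȳ_respondents − ȳ_nonrespondents). The sketch below illustrates it with invented numbers for a hypothetical 1-5 survey item; the function name and all values are assumptions for illustration, not taken from the studies cited.

```python
def nonresponse_bias(response_rate, mean_respondents, mean_nonrespondents):
    """Bias of the respondent mean relative to the full population,
    under the standard decomposition:
    bias = (1 - response_rate) * (respondent mean - nonrespondent mean)."""
    return (1 - response_rate) * (mean_respondents - mean_nonrespondents)

# If respondents and nonrespondents answer alike, even a 20 percent
# response rate produces no bias.
print(nonresponse_bias(0.20, 3.5, 3.5))  # 0.0

# With a modest gap between the groups, a higher response rate shrinks,
# but does not eliminate, the bias.
print(round(nonresponse_bias(0.20, 3.5, 3.1), 2))  # 0.32
print(round(nonresponse_bias(0.80, 3.5, 3.1), 2))  # 0.08
```

The first case is why a low response rate "may or may not" bias an estimate: when participation is uncorrelated with the survey's content, the gap term is zero and the response rate becomes irrelevant.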
Due to these insights, survey researchers have increasingly examined the impact of nonresponse on their survey estimates. Perneger, Chamot, and Bovier (2005) assessed nonresponse...