Investigating Differences Between Low- and High-Stakes Test Performance on a General Education Exam
There is increasing pressure for institutions of higher education in the United States to objectively document student learning outcomes (Lederman, 2007; U.S. Department of Education, 2006). Criticism of higher education is mounting, and educational institutions are increasingly expected to be accountable for student learning on their campuses (Palomba & Banta, 1999).
As a result, there is increased interest in the viability and use of standardized tests to assess student academic achievement in higher education. Currently, many tests are used for collegiate assessment, including the Collegiate Learning Assessment (CLA), the College Basic Academic Subjects Examination (College BASE), the Collegiate Assessment of Academic Proficiency (CAAP), and the Measure of Academic Proficiency and Progress (MAPP), published by the Council for Aid to Education, the Assessment Resource Center, ACT, and ETS, respectively. These exams are typically administered to undergraduates as "low-stakes" tests to assess critical thinking, domain-specific content knowledge, writing, and other subject areas generally covered in a general education program.
The use of standardized exams is often referred to as "direct assessment" of student learning and is considered a powerful tool for assessing learning outcomes (Kuh, Kinzie, Buckley, Bridges, & Hayek, 2006; Pascarella & Terenzini, 2005). However, such exams often have a shortcoming that is not always recognized or acknowledged: for students, the tests most often have no consequences. We term these "low-stakes" tests. Although these tests are low stakes for students, they are sometimes high stakes for institutions, because accreditation and legislative funding can hinge on test score data. The purpose of this study was to investigate test performance differences among undergraduate students under low- and high-stakes test conditions.
Throughout the school year, college students participate in a number of academic activities, some of which students enjoy and some of which they do not. One situation that many students may not enjoy is taking standardized achievement tests. This may be especially true for standardized exams that students are required to take but which have no meaningful outcome or consequence for the student (Cole & Bergin, 2005; Paris, Lawton, Turner, & Roth, 1991; Smith & Smith, 2004). These are often referred to as "low-stakes" exams, and the concern is that students in these testing situations may not be highly motivated to try their best (Smith & Smith, 2004; Sundre & Kitsantas, 2004; Wise & DeMars, 2005). Recognizing that test performance is a function of both knowledge and motivation, the possibility of low student motivation raises the concern of whether data collected are a valid measure of student achievement (Eklof, 2006; Wainer, 1993). As noted by Eklof (2006), "Ignoring the test-taking motivation component in low-stakes achievement testing could lead to a confounding of knowledge and motivation and thereby be a threat to the validity of the results" (p. 644). This concern has proven to be a major challenge for educational institutions that make decisions based on these test results. Institutions want to be confident that their students' test scores accurately represent (to the extent possible) student knowledge in the academic subject areas tested. According to Erwin and Wise (2002), "The challenge to motivate our students to give their best effort when there are few or no personal consequences is probably the most vexing assessment problem we face" (p. 71).
Understanding the consequences of a test is the critical element in deciding whether the test is a low- or high-stakes test (DeCesare, 2002; Goertz & Duffy, 2003). We define a low-stakes exam as any exam that has no meaningful consequence to the test taker. Conversely, a high-stakes test has at least some academic or other meaningful consequence to the student. For example, at present twenty-two states have an exit exam for high school students (Kober, Zabala, Chudowsky, Chudowsky, Gayler, & McMurrer, 2006). These states require students to achieve a minimum cut score in order to graduate from high school. Clearly, these states are employing high-stakes testing for their high school students: there is an unambiguous consequence to students based on their test score. Another example is the...