
1 Higher Education, Framing, and Writing Assessment

Consider the following scenario, discussed on the Writing Program Administration listserv (WPA-L). The scenario is based on the experiences of a writing program administrator at a large midwestern university: The writing program director learns that “there is a movement afoot” at her university to administer the Collegiate Learning Assessment (CLA) to first-year students and seniors. This will mean that these students will take a ninety-minute essay exam designed to “test” their critical thinking skills. The test results will be published so that her institution can be compared to others in its category and, if necessary, used to improve any weaknesses that are identified. In listening to the conversations on campus, this program director feels there is an implicit message that the test would be a way of marketing the school as a “first-rate institution.” Although no one explicitly discusses the CLA as an assessment of writing (instead, they say, it is an indication of critical thinking skills), she feels strongly that it will reflect on the writing program. In response to what she is learning as she listens to the discussions on campus, the program director turns to the national community of writing professionals on the WPA-L to get background information for her upcoming meeting with the university assessment committee. She learns that the test is just one indicator the school wants to use to demonstrate “critical thinking”—although the other indicators were never articulated, at least not to her. After the meeting, she writes a memo to the committee and administrators outlining her concerns based on her knowledge of writing pedagogy, assessment, and the curriculum at her institution. The memo outlines the disadvantages of the test and the possibilities of developing an in-house assessment.
This type of assessment, she argues, would better serve the needs of the local institution and would be more likely to improve teaching and learning. By her own admission, she doesn’t know if this detailed memo (almost three pages long) will do any good. However, much later, she learns that the CLA was abandoned, but she doesn’t know why. She “heard that one of the administrators who was involved in this mentioned” that her “memo was ‘unhelpful.’” None of her suggestions for a locally designed assessment that would track students across their undergraduate careers was adopted—nor were they even discussed seriously at the time, she admits. In her words, the CLA “threat” just went away. In the end, she has ideas about the motivation behind the initial move, but she has no concrete evidence. The writing program does do ongoing program review, and it has always been praised by the administration for that review. As the program director, she wasn’t aware of the conversations about using writing as a way to assess critical thinking at the university level, and she wasn’t brought into the deliberations. She did, however, react to the news and provide her perspective as the program director and a composition scholar. This example, which has been repeated in various permutations in listserv discussions, in hallway conversations at conferences, and on campuses across the country, illustrates the dilemmas and questions that can emerge around assessment. While the example reflects the experiences of many writing instructors, program directors, department chairs, or other writing professionals, it represents just one type of assessment—institutionally based, university-wide assessment—about which we hear fairly often. As the example also illustrates, the extent to which we writing professionals are “included” (asked or invited to participate; directed to provide input) in these discussions varies, sometimes from moment to moment or day to day.
We know that all postsecondary educators live in a culture where assessment is increasingly important. From the nearly ubiquitous process of program review to assessments (also) linked to accreditation to institutional participation in initiatives like the Voluntary System of Accountability (VSA, which might involve administration of one of three standardized tests: the Collegiate Assessment of Academic Progress [CAAP], Measure of Academic Proficiency and Progress [MAPP], or the Collegiate Learning Assessment [CLA]; or might involve use of the AAC&U’s VALUE rubrics), the sheer number of efforts—and acronyms—surrounding assessment can be at best incredibly confusing, and at worst positively dizzying. Whether a classroom instructor, a course director, someone who works with a writing program, or someone who serves in some other teaching and/or...