3: WRITING CENTER ASSESSMENT: Searching for the “Proof” of Our Effectiveness

NEAL LERNER

Two words that haunt writing center professionals are “research” and “assessment.” The first is too often held out as something others do to us, something we do not have time for, or something that is lacking in our field. The second is tied to our financial and institutional futures—if we cannot assess how well we are doing whatever it is we are supposed to be doing, we are surely doomed.

In this chapter, I reclaim these two words in several ways. First, I review the history of calls for our field to answer the assessment bell, calls that act as a sort of evaluative conscience, laying on twenty-plus years of guilt about our inability or unwillingness to prove ourselves to our institutions and, ultimately, to ourselves. Next, I offer a critique of the few published studies of writing center effects, pointing out the logical and methodological complications of such work. Then, I turn to the larger assessment movement in higher education, particularly the work being done to study students’ first year in college or university. I take from that research not only useful assessment tools that might be adapted to writing center settings, but also important cautions about the nature of assessment work and its potential pitfalls. Finally, I offer some examples of real live assessment from the writing center I direct at my institution, not necessarily as exemplars for the field, but instead as indications that the work I call for can, indeed, be done. Overall, my intent here is to offer a clearer understanding of research to provide evidence of writing center “effects,” its uses and limitations, and to put into a critical context the common call to investigate how well we are doing.

EVALUATE OR ELSE

For any of us engaged in writing center work, it always seems obvious that one-to-one teaching of writing is effective, and this belief has a long history. In 1939, E. C. Beck wrote in English Journal that “perhaps it is not too much to say that the conference method has established itself as the most successful method of teaching English composition” (594). Nevertheless, as writing centers moved from “method” to “site”—as Beth Boquet (1999) describes the evolution of the free-standing writing center—frequent calls for “accountability” followed, usually in response to threats from budget-conscious administrators or misguided faculty. However, the attempts to provide this accountability (or simply call for it) that have appeared in our literature often say more about our field’s uneasiness with evaluation research than about the effectiveness of the work we do.

One source of uneasiness is with the use of statistics beyond the simple counting of numbers of students or appointments. In 1982, Janice Neuleib explained this uneasiness by noting that “many academics tend to wring their hands when faced with the prospect of a formal evaluation. English teachers especially have often not been trained in statistics, yet formal evaluation either explicitly or implicitly demands statistics” (227). For Neuleib, “formal” evaluation is necessary because “[good] tutoring and all that goes with it cannot be appreciated without verifiable evaluation techniques” (232). While Neuleib’s call is nearly twenty years old at the time of this writing, it is difficult to say that the field has answered her charge with a rich body of statistical research.
The reasons for this absence are many, but most important, in my view, is composition’s orientation toward qualitative or naturalistic studies of students’ composing processes, as Cindy Johanek has pointed out (2000, 56). While I am aware that qualitative evidence can lend a rich and nuanced perspective to our evaluation studies (and have performed and will continue to perform such studies myself), I join Johanek in calling for additional research methods, namely quantitative or statistical ones, to understand more fully the work we do. Statistical evidence also lends itself to short forms, perfect for bullet items, PowerPoint presentations, and short attention spans—in other words, perfect for appeals to administrators and accrediting bodies. I would also argue that despite Neuleib’s statement about our fear of numbers, our field is often under the sway of numerology, given the ways we have always counted who comes through our doors and why. Nancy McCracken of Youngstown State identified the need to evaluate in 1979: “Many of us have...

