
CHAPTER 3
Tools

Quality assessment depends on having good data to assess.

Types of Evidence

Chapter 3 features the tools that can be used to generate data and to guide the inferences we make from that data. Data are a series of systematically taken snapshots, all focused on what's going on in a program. The key to good data is having a system in place that ensures that the snapshots are representative (i.e., not focused on only one aspect or one type of participant) and detailed enough for our purposes.

Researchers typically divide data into two types of evidence: direct and indirect. Direct evidence provides a realistic, unfiltered representation of the performance being assessed. It may be a paper that students write, a test they take, or transcripts of their interactions with a consultant. Direct evidence further divides into two subtypes: natural and elicited. Natural data cover anything that would have been produced even if it weren't needed for assessment purposes (i.e., regular program activities). Elicited data have guidelines for generating the data, such as a writing prompt or a request to talk about mechanical errors in a consulting session that's being videoed. The trade-off between natural and elicited data involves deciding whether authenticity or comparability is more important.

Direct evidence is really just a record of what happened, and therefore it usually requires some type of evaluation guide that ensures that the analysis of the data is systematic. The guide might be a rubric used to evaluate a paper or a coding scheme applied to a transcript. It might also be a specific measure, such as the percentage of sentences without grammatical errors. Evaluation guides do not make the inferences for us, however. They do not tell us if a four is an acceptable score on a rating rubric or if a 10 percent increase in the proportion of sentences without grammatical errors should represent progress. They merely make the process of forming inferences easier by focusing attention on specific types of information in the data.

Indirect evidence, on the other hand, arrives in a pre-filtered format. Whereas direct evidence requires someone to analyze it and make inferences about performance levels or achievement based on an evaluation guide, with indirect evidence the inferences are made when the data are collected, often by the students producing the data or by someone observing them. Common tools for generating indirect evidence include questionnaires, attitudinal surveys, and focus groups. When students respond to a question about the most helpful part of a program as part of a focus group, their responses represent inferences about program achievements. With indirect evidence, the purpose of the data collection instrument or tool, therefore, is simply to record the inferences that will be useful for evaluating the goals of the program.

Collection Procedures

Whether we are gathering direct or indirect evidence, we need to have a principled procedure for collecting it. Some basic issues to consider include:

• Who? Should we sample every participant or choose a cross-section representing the constituencies that we think are significant?
• What? Is the data comparable from one participant to another? For example, we probably wouldn't want to compare a research paper with a narrative essay for the use of sources.
• When? Is the data being collected at roughly the same point in the program for each participant? Someone given a survey at the end of a consulting session may respond differently than someone caught two weeks later.
• Where? Is the data produced under the same conditions for each participant? In-class writing samples can differ dramatically between a 90-minute and a 60-minute class.

The four principal topics in this chapter are:

1. Production Tasks—common ways of generating and eliciting direct evidence
2. Evaluation Guides—tools to guide the analysis of direct evidence
3. Indirect Measures—instruments for eliciting people's inferences
4. Database Design—issues related to the storage of data in ways that facilitate analysis

Resources

Further Reading

Swing, R. L. (2004). Proving and improving, Volume II: Tools and techniques for assessing the first college year (The First-Year Experience Monograph Series No. 37). Columbia, SC: National Resource Center for the First-Year Experience and Students in Transition, University of South Carolina.

Walvoord, B. E., & Anderson, V. J. (1998). Effective grading: A tool for learning and assessment. San Francisco: Jossey-Bass.

Production...
