Analyzing Evaluation Data

John McE. Davis

In this chapter, we provide introductory advice for analyzing and interpreting evaluation information. We do so by focusing on two data-collection strategies used in the eleventh-grade Chinese telecollaboration scenario: the student focus group and the student questionnaire. For each tool, we describe strategies for summarizing the data and interpreting the results (using Microsoft Excel). We also emphasize the key point that evaluation data analysis and interpretation must be guided by project evaluation questions. Evaluators must take special care to ensure that the conclusions they draw from the evaluation results provide answers directly related to project questions and uses. In addition, we provide guidance for ensuring that analysis and interpretation are systematic processes that lead to trustworthy evidence. Finally, we offer some techniques to help ensure that data analysis and interpretation support evaluation usefulness.

Analysis versus Interpretation

We use the term "analysis" in this chapter to refer to the process of organizing and summarizing evaluation information so that conclusions can be drawn about what the data mean and how they answer evaluation questions. For example, numerical analysis of questionnaire data would involve calculations to determine frequencies and percentages of particular responses so that the evaluator and users can easily see what the data suggest about respondents' views. For comments data, analysis typically involves identifying and tracking recurrent themes arising in respondents' comments from focus groups or interviews, counting how many times each theme arises, and then listing the tallies and proportions for each theme to get a sense of frequent and important ideas. When data are analyzed and ready for interpretation, they become the "results" of the data-collection process. Interpretation is slightly different.
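The frequency-and-percentage and theme-tally summaries described above can be sketched briefly in code. Although the chapter demonstrates these steps in Microsoft Excel, the sketch below uses Python for compactness; all responses and theme labels are invented for illustration.

```python
from collections import Counter

# Hypothetical responses to one 5-point questionnaire item
# (1 = strongly disagree ... 5 = strongly agree); invented data.
responses = [4, 5, 3, 4, 4, 2, 5, 4, 3, 5, 4, 4]
counts = Counter(responses)
total = len(responses)
for option in sorted(counts):
    pct = 100 * counts[option] / total
    print(f"Option {option}: {counts[option]} responses ({pct:.1f}%)")

# Hypothetical theme codes assigned to focus-group comments;
# tallying them shows which ideas recur most often.
themes = ["pacing", "tech problems", "pacing", "partner contact",
          "tech problems", "pacing"]
for theme, tally in Counter(themes).most_common():
    print(f"{theme}: {tally} comments ({100 * tally / len(themes):.0f}%)")
```

The same logic applies whether the tallying is done by formula, pivot table, or script: the analyst ends up with counts and proportions that make patterns in the raw data visible at a glance.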
Interpretation is the process of drawing conclusions from the analyzed data to give the results meaning. Interpretation can be thought of as a way to answer the "So what?" question. When questionnaire, focus-group, interview, or assessment results are in hand, how do they matter to the evaluation? What do the results mean? What story do the results seem to tell? What answers do the data suggest in response to the evaluation questions? Interpretation is the final step in the evaluation that synthesizes the results into the key evaluation findings.

In the remainder of the chapter we discuss two of the three data-collection methods from the Chinese telecollaboration scenario: a fictional questionnaire sent to students and a fictional student focus-group session. We explore both of these methods, presenting example data along the way, to show how numerical and textual data can be systematically analyzed and interpreted to answer evaluation questions.

Trustworthiness of Analysis and Interpretation

Before looking at strategies for how to analyze and interpret data, it is important to note that both processes need to be conducted at a high level of quality and rigor to ensure data trustworthiness. Data trustworthiness is crucially important for evaluation to be useful (see chapter 5). Stakeholders and users need to feel that the findings from the evaluation are accurate and free from bias and other problems of measurement. Important decisions cannot be made on the basis of faulty or untrustworthy information. Summaries and interpretations of evaluation data, then, must be conducted carefully and systematically, using particular strategies to avoid bias and other errors. Recall from the previous chapters the different ways in which bias and other types of inaccuracy can creep into data design and collection processes.
For example, questionnaire items might be written poorly such that they influence respondents to answer in ways they otherwise would not have, had the items been worded differently. Or perhaps an interviewer shows excessive disapproval or enthusiasm during an interview, which changes how the interviewee responds, so that the evaluation no longer collects accurate information on interviewee opinions. In this chapter, we highlight threats to data accuracy and trustworthiness that can arise during the analysis and interpretation phases of the evaluation. When analyzing numerical or comments data, special care must be taken to summarize the data in a way that avoids omitting information, distorting information, or otherwise failing to capture important trends or features in the data. For example, having only one evaluator identify themes in comments data may not be enough to ensure that important, recurring ideas have been captured accurately. Or, for numerical data, reporting only an average for rating items (and failing to provide standard deviations) may not give a complete picture of how students or teachers view a particular program issue...
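The point about averages hiding spread can be made concrete with a small numerical sketch. The two hypothetical groups of ratings below (invented for illustration) have identical means but very different standard deviations, so reporting the mean alone would mask the fact that one group is sharply divided.

```python
import statistics

# Hypothetical ratings from two groups on the same 5-point item.
# Group A agrees moderately across the board; Group B is polarized.
group_a = [3, 3, 3, 3, 3]
group_b = [1, 5, 1, 5, 3]

for name, ratings in [("A", group_a), ("B", group_b)]:
    mean = statistics.mean(ratings)
    sd = statistics.stdev(ratings)
    print(f"Group {name}: mean = {mean:.2f}, sd = {sd:.2f}")
# Both means are 3.00, but Group B's standard deviation (2.00)
# reveals disagreement that Group A's (0.00) does not.
```

Reporting the standard deviation alongside the mean, whether computed in Excel (for example, with STDEV) or in a script like this one, gives stakeholders a fuller picture of how much respondents actually agree.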

