
5

Selecting Methods and Collecting Data for Evaluation

TODD H. McKAY and JOHN McE. DAVIS

At this stage in an evaluation project, substantial planning work has been done to help increase the likelihood that evaluation findings are useful for programmatic decision-making. Specifically, planning efforts will have led to evaluators, users, and stakeholders identifying the following:

1. Specifically who needs to do something via the evaluation (i.e., the evaluation users)
2. What those groups or individuals need to do with evaluation information (i.e., the evaluation uses)
3. Specifically what they want to investigate (i.e., the evaluation questions)
4. The types of information needed to "answer" the evaluation questions (i.e., the indicators)

Ideally, all of these steps will have preceded decisions about specifically how data will be collected or which tools will be used to collect needed information. As argued in chapter 4, the purpose of doing so is to avoid common evaluation planning mistakes (i.e., choosing the wrong tools or not choosing enough tools) and to ensure that the right information is collected given project users' needs.

These planning steps completed, evaluators can now select—and help users select—the specific strategies for collecting evaluation information. Questionnaires, focus groups, interviews, assessments, document review, photos, videos, expert panels, simulations, journals, logs, diaries, observation, testimonials, and so on are all possible choices. Of course, each method has its strengths and weaknesses (see chapters 6–8). Different data-collection methods provide specific types of information, which shed light on evaluation questions in different ways. Questionnaires, for example, can collect information from large numbers of respondents, which can be analyzed relatively quickly and efficiently, though results from selected-response items may provide little insight into program processes.
By contrast, interview or focus-group information may provide nuanced insights into complex program processes, though the full scope of the textual-type data can be difficult to summarize and report in a parsimonious way. In addition to the strengths and weaknesses of particular tools affecting methods decisions, stakeholders and users might privilege certain types of information over others (e.g., textual over numerical or statistical), which may argue for choosing one tool over another. Or practical considerations, like time and expertise, may affect which tool is possible or feasible to use (assessment, for example, can require specialized expertise). Choosing the most appropriate data-collection tool, then, will depend on a number of considerations, not the least of which is the ultimate aim of providing users with the information they need to realize their intended evaluation uses. To that end, evaluators will need to consider various factors to ensure that the right tools are selected:

1. Matching the tool to an indicator
2. Trustworthy information
3. Feasibility, practicality
4. Qualitative, quantitative, or mixed methods
5. Ethical data collection and evaluation
6. Engaging users in methods choices

Matching the Tool to an Indicator

When selecting a data-collection tool, evaluators need to ensure that the tool or strategy captures evidence directly related to project indicators. This point cannot be stressed enough. Every tool selected for an evaluation should demonstrably link to one or more project indicators. Any tool that fails to collect information on an indicator should be eliminated from the project plan, since it will likely fail to collect information relevant to the evaluation questions. A matrix, like the one depicted in table 5.1, should be a part of every evaluation plan and should lay out in a systematic way which tools are being used to capture which indicators.
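The tool-to-indicator matrix described here can be expressed as a simple mapping that is checked mechanically. Below is a minimal, hypothetical sketch in Python; the tool and indicator names are invented for illustration and do not come from table 5.1:

```python
# Hypothetical tool-indicator matrix (modeled loosely on the kind of
# matrix described above); every name here is an invented example.
matrix = {
    "questionnaire": ["participant satisfaction", "self-reported gains"],
    "focus group": ["participant satisfaction"],
    "classroom observation": [],  # linked to no indicator
}

# Per the guidance above, any tool that fails to link to at least one
# project indicator should be dropped from the evaluation plan.
unlinked = [tool for tool, indicators in matrix.items() if not indicators]
print("Tools to reconsider:", unlinked)
```

Running this flags "classroom observation" as a candidate for removal, since it captures evidence for no indicator. The same check scales to a full plan: the matrix doubles as documentation (which tools capture which indicators) and as a guard against the planning mistake of choosing a tool with no evaluative purpose.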
Trustworthy Information

For evaluation to be useful, users and other stakeholders need to have confidence in the evaluation results and conclusions. Data collection, then, needs to be conducted in a way that users regard as trustworthy. Users need to view the tools and strategies selected for their evaluation as appropriate to their aims and intended uses. Data trustworthiness corresponds to the research notion of validity and the different ways in which bias, measurement error, and other types of inaccuracy can creep into methods design and data-collection processes. Trustworthy data collection means that tools and strategies capture information accurately. That is, evaluation tools and strategies should not distort or otherwise change the information they intend to capture because of faulty design or implementation. For example, poorly written questionnaire items may influence respondents to answer in ways that they might not have otherwise, had the item been written differently (e.g., in a more neutral way). Or an interviewer who shows excessive disapproval or enthusiasm during an interview...


