
Cooper, H., & Hedges, L. V. (Eds.). 1994. The Handbook of Research Synthesis. New York: Russell Sage Foundation.

11 EVALUATING CODING DECISIONS

ROBERT G. ORWIN
R.O.W. Sciences, Inc.

CONTENTS

1. Introduction
2. Sources of Error in Coding Decisions
   2.1 Deficient Reporting in Primary Studies
   2.2 Ambiguities in the Judgment Process
   2.3 Coder Bias
   2.4 Coder Mistakes
3. Strategies to Reduce Error
   3.1 Contacting Original Investigators
   3.2 Consulting External Literature
   3.3 Training Coders
   3.4 Pilot Testing the Coding Protocol
   3.5 Possessing Substantive Expertise
   3.6 Improving Primary Reporting
   3.7 Using Averaged Ratings
4. Strategies to Control for Error
   4.1 Reliability Assessment
      4.1.1 Rationale
      4.1.2 Across-the-board versus item-by-item agreement
      4.1.3 Hierarchical approaches
      4.1.4 Specific indices of interrater reliability
      4.1.5 Selection, interpretation, and reporting of interrater reliability indices
      4.1.6 Assessing coder drift
   4.2 Confidence Ratings
      4.2.1 Rationale
      4.2.2 Empirical distinctness from reliability
      4.2.3 Methods for assessing coder confidence
   4.3 Sensitivity Analysis
      4.3.1 Rationale
      4.3.2 Multiple ratings of ambiguous items
      4.3.3 Multiple measures of interrater agreement
      4.3.4 Isolating questionable cases
      4.3.5 Isolating questionable variables
5. Suggestions for Further Research
   5.1 Moving from Observation to Explanation
   5.2 Assessing Time and Necessity
6. References

1. INTRODUCTION

Coding is a critical part of research synthesis. It represents an attempt to reduce a complex, messy, context-laden, and quantification-resistant reality to a matrix of numbers. Thus, it will always remain a challenge to fit the numerical scheme to the reality, and the fit will never be perfect.
Systematic strategies for evaluating coding decisions enable the synthesist to control for much of the error inherent in the process. When used in conjunction with other strategies, they can help reduce error as well. This chapter discusses strategies to reduce error as well as strategies to control for error, and suggests further research to advance the theory and practice of this particular aspect of the synthesis process. To set the context, however, it will first be useful to describe the sources of error in synthesis coding decisions.

2. SOURCES OF ERROR IN CODING DECISIONS

2.1 Deficient Reporting in Primary Studies

Reporting deficiencies in original studies present an obvious problem for the synthesist, to whom the research report is the sole documentation of what was done and what was found. The reporting quality of primary research studies has variously been called "shocking" (Light & Pillemer 1984), "deficient" (Orwin & Cordray 1985), and "appalling" (Oliver 1987).1 In the worst case, deficient reporting can force the abandonment of a synthesis.2 Virtually all write-ups will report some information poorly, but some will be so vague as to obscure what took place entirely. The absence of clear and/or universally accepted norms undoubtedly contributes to the variation, but there are other factors: different emphases in training, scarcity of journal space, statistical mistakes, and poor writing. The consequences are differences in the completeness, accuracy, and clarity with which empirical research is reported. Treatment regimens and subject characteristics cannot be accurately transcribed by the coder when inadequately reported by the original author.

1. It should be noted that significant reporting deficiencies, and their deleterious effect on research synthesis, are not confined to social research areas; see, for example, Wortman and Yeaton's synthesis work (1985) on the effectiveness of coronary artery bypass surgery.
Similarly, methodological features cannot be coded with certainty when research methods are poorly described. The immediate consequence of coder uncertainty is coder error. Smith, Glass, and Miller (1980) recognized the problem and devised guessing conventions as a partial solution. When an investigator was remiss in reporting the length of time the therapist had been practicing, for example, the guessing convention for therapist experience was called on to provide a response. Such a device serves to standardize decisions under uncertainty, and therefore increase intercoder agreement. It is arguable whether it reduces coder error, however, there being no way to externally validate the accuracy of the convention. Furthermore, a guessing convention carries the likelihood of bias in addition to error. Unlike pure observational error, which presumably distributes itself randomly...
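The logic of a guessing convention can be sketched in code. The sketch below is hypothetical: the field names, default values, and function are invented for illustration, not taken from Smith, Glass, and Miller's protocol. The key idea it demonstrates is that every coder substitutes the same agreed-upon default when the report is silent, rather than guessing independently, and that guessed values are flagged so they can later be isolated in a sensitivity analysis.

```python
# Hypothetical illustration of a guessing convention: a shared table of
# defaults that all coders apply when a primary study omits a value.
# Field names and default values here are invented for illustration.
GUESSING_CONVENTIONS = {
    "therapist_experience_years": 2.0,  # default when experience is unreported
    "treatment_weeks": 8,               # default when regimen length is unreported
}

def code_field(report: dict, field: str):
    """Return (value, guessed): the reported value if present, else the
    convention default, with a flag marking convention-supplied values."""
    if report.get(field) is not None:
        return report[field], False           # value taken from the report
    return GUESSING_CONVENTIONS[field], True  # value supplied by convention

# A study that reports regimen length but not therapist experience:
study = {"treatment_weeks": 12}
value, guessed = code_field(study, "therapist_experience_years")
print(value, guessed)  # -> 2.0 True
```

Because every coder draws the same default from the shared table, intercoder agreement rises mechanically, which is exactly why, as noted above, agreement statistics alone cannot show that the convention reduces error; the `guessed` flag preserves the information needed to test the convention's influence later.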
