
Cooper, H. and Hedges, L. V. (Eds.) 1994. The Handbook of Research Synthesis. New York: Russell Sage Foundation

SPECIAL STATISTICAL ISSUES AND PROBLEMS

25. PUBLICATION BIAS
Colin B. Begg, Memorial Sloan-Kettering Cancer Center

CONTENTS
1. Background
2. Methods for Identifying Publication Bias
   2.1 Preliminary Analysis
   2.2 Sample Size
   2.3 Statistical Significance Test
3. Methods for Correcting Publication Bias
   3.1 Sampling Methods
   3.2 Analytic Methods
       3.2.1 The file-drawer method
       3.2.2 Weighted distribution theory
4. Discussion
5. References

1. BACKGROUND

The style of reporting the results of a research study in a journal article is governed as much by human nature as by the tradition of scientific objectivity. That is, research studies are commonly reported in an advocacy style. Statistical significance, if it is achieved, may be used as "proof" of a theory. Moreover, the statistical analysis may be subjectively influenced by the use of a variety of statistical tests, excluding certain categories of subjects, performing analyses in selected subgroups, or adjusting the analysis for covariates, all with the goal of presenting the data in such a way as to provide the greatest support for the preferred theory under study. Of course, the fact that the published article does not accurately reflect the true research process is not limited to the purely statistical aspects of the paper, and indeed Medawar (1963) has suggested that most scientific articles are essentially fraudulent in that they systematically misrepresent the process by which the conclusions have been reached. This entire phenomenon is a kind of publication bias, which one might refer to as subjective publication bias, and it is a bias that is well recognized throughout the scientific community.
Another widely recognized bias, and one that is very important for meta-analysis, is the one that is induced by selective publication, in which the decision to publish is influenced by the results of the study. This is what meta-analysts refer to as publication bias, and one might refer to it as objective publication bias, since it is the "objective" data reported that are subject to the bias. That is, if one can extract the raw data or relevant summary data from the paper, stripping away the attendant subjective interpretations, these seemingly objective data are still subject to bias owing to the selective publication. How does this bias occur? One way of conceptualizing the problem is to consider the scenario in which a number of investigators are independently conducting identical studies to estimate some effect. After the studies are completed the estimates will differ owing to random variation. The investigator who happened to perform the study that produced the largest effect (i.e., the most significant effect) is the most likely to publish the results. However, selecting the largest of a random sample of estimates will provide a positively biased estimate of the true (mean) effect, and the magnitude of this bias is a function of the sample size of the study and the number of concurrent studies from which the largest estimate is selected. For example, if five identical studies are conducted, each with a sample size of 20, the largest mean effect size is positively biased by 0.26 standard deviations. A study of this phenomenon shows that the magnitude of the bias is inversely related to sample size and positively associated with the number of concurrent studies (Begg 1985). In other words, we should be especially concerned about publication bias in settings in which lots of small studies are being conducted. 
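This selection effect is easy to reproduce by simulation. The sketch below (plain Python, standard library only; the function name and parameters are illustrative, not from the text) matches the example above: five studies of sample size 20 drawn from a unit-variance population with true effect zero. Each study mean is distributed N(0, 1/sqrt(20)), so the means are drawn directly; keeping the largest of the five and averaging over many replications recovers a positive bias close to the 0.26 standard deviations quoted above.

```python
import math
import random

def max_selection_bias(n_studies=5, n_per_study=20, reps=100_000, seed=1):
    """Average of the largest of n_studies sample means when the true
    effect is zero. With a unit-SD outcome, each study mean is
    N(0, 1/sqrt(n_per_study)), so we draw the means directly."""
    rng = random.Random(seed)
    se = 1.0 / math.sqrt(n_per_study)
    total = 0.0
    for _ in range(reps):
        # "Publish" only the largest of the concurrent estimates.
        total += max(rng.gauss(0.0, se) for _ in range(n_studies))
    return total / reps

if __name__ == "__main__":
    print(f"bias of the largest of 5 means (n=20): {max_selection_bias():.3f}")
```

Re-running with a larger `n_per_study` or a smaller `n_studies` shrinks the bias, consistent with the inverse relation to sample size and the positive association with the number of concurrent studies described above.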
In many cases the decision to publish will be influenced by the presence or absence of a statistically significant effect, with significant results more likely to be published. Such a phenomenon produces a preponderance of statistically significant publications and increases the chance that any single publication is a false positive. That is, the nominal 5 percent chance of a false positive is an underestimate. Editorial policy can potentially accentuate this problem by discouraging the publication of negative studies (Melton 1962). There have been a number of empirical investigations of the magnitude of publication bias, most of which demonstrate the potentially serious implications for meta-analysis. The following review is not intended to be comprehensive, but merely highlights a few of the studies that have been conducted. For a more detailed exposition, see Begg and Berlin (1988). The earliest studies of publication bias were concentrated in the social science literature...
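The inflation of the false positive rate among published results can also be illustrated by simulation. The sketch below is an illustration under assumed parameters not taken from the text: 80 percent of studies test a truly null effect, the remainder a real effect of 0.5 standard deviations, each with n = 25 and a two-sided z-test at the 5 percent level; only significant studies are "published". Among the published studies, the false positive share comes out well above the nominal 5 percent.

```python
import math
import random

def published_false_positive_share(n_studies=20_000, prop_null=0.8,
                                   true_effect=0.5, n=25, seed=2):
    """Fraction of 'published' (i.e., significant) results that test a
    truly null effect. Two-sided z-test at the 5% level on a sample
    mean from N(mu, 1) with n observations."""
    rng = random.Random(seed)
    published = false_positives = 0
    for _ in range(n_studies):
        is_null = rng.random() < prop_null
        mu = 0.0 if is_null else true_effect
        # z-statistic of the sample mean: N(mu*sqrt(n), 1).
        z = rng.gauss(mu * math.sqrt(n), 1.0)
        if abs(z) > 1.96:          # only significant results are published
            published += 1
            false_positives += is_null
    return false_positives / published

if __name__ == "__main__":
    share = published_false_positive_share()
    print(f"false-positive share among published studies: {share:.2f}")
```

Under these assumed parameters roughly a fifth of published findings are false positives, even though each individual test holds its nominal 5 percent level; the exact figure depends on the assumed mix of null and real effects and on study power.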
