4 Meta-Analysis

Meta-analysis is a body of techniques for combining many statistical studies to determine an overall result. T. D. Stanley has written extensively on a particularly useful and straightforward technique, metaregression (Stanley 2001, 2005, 2008; Stanley and Doucouliagos 2012; Stanley and Jarrell 1989), as well as on two recent applications to the minimum wage (de Linde Leonard, Stanley, and Doucouliagos 2013; Doucouliagos and Stanley 2009). This section begins with a brief description of the technique, drawing heavily on these articles, followed by a discussion of these two recent meta-analyses of minimum wage research, and concludes with our own metaregression analysis of the literature covered in Chapters 2 and 3.

When confronted with results from many studies of the same phenomenon, summarizing them or combining them into a single overall result can be a challenge. The first problem is that they must all be measuring the same thing and all must present the results in the same units, or at least in a way that the metaresearcher can put them into the same units. Once past this hurdle, an obvious way to aggregate results is to calculate their average value, and with some complications, this is what metaregression does. The complications arise from recognizing that, for a variety of reasons, estimates are not all created equal, and that it is therefore not appropriate to give equal weight to all results in calculating the average.

Publication bias, an issue that Card and Krueger (1995) raise in their discussion of the earlier pre-NMWR literature on the minimum wage, is one reason for not treating all results as equally important. Publication bias means that the probability of a paper's being published depends on the results it reports.
It can occur for reasons that are nefarious, such as journal editors' refusing to publish papers in which results do not toe a party line, or, as is more widely suspected, for reasons that are less so, where a scarcity of journal pages leads editors to reject papers as uninteresting because their results are indeterminate (i.e., not statistically significant) or are deemed insufficiently novel or ingenious. Whatever the reason, attempts to generalize without accounting for publication bias give rise to biased meta-estimates of an effect by overcounting certain results and excluding others.

Even absent publication bias, differences in standard errors are another reason for not treating all results as equally important. Imprecisely estimated values are of less value in understanding and evaluating an effect than those that are measured with greater precision (Stanley 2001) and should not be given equal weight in any evaluation. Finally, estimated effects may differ systematically because of differences in statistical framework, data source, data period, unknown and unrecognized actions of particular authors in analyzing the data (Stanley 2001), and other factors too numerous to mention. Identifying which of these factors are important and accounting for them in the meta-analysis makes it possible to understand the source of differences in the estimated values.

We can chart the progress of this argument with a series of equations, in the process of which the specific technique of metaregression will become clear.1 We start with a simple average in Equation (4.1):

(4.1) Effect_k = b_1 + u_k,

where Effect_k is the estimate reported by study k and the intercept b_1, the average of the Effect_k, is the meta-estimate, an overall estimate of the size of the effect in question.
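The simple average of Equation (4.1) is equivalent to an intercept-only least-squares regression of the reported effects on a constant. A minimal sketch in Python (the effect values here are hypothetical illustrations, not drawn from the minimum wage literature):

```python
import numpy as np

# Hypothetical effect estimates from k = 6 studies (illustrative only).
effects = np.array([-0.12, -0.05, 0.02, -0.08, -0.10, 0.01])

# Equation (4.1) regresses Effect_k on a constant alone, so the fitted
# intercept b_1 is simply the unweighted average of the estimates.
X = np.ones((len(effects), 1))
coef, *_ = np.linalg.lstsq(X, effects, rcond=None)
b1 = coef[0]
print(b1)  # identical to effects.mean()
```

Writing the average as a regression may seem roundabout, but it sets up the next step: additional regressors can then absorb publication bias and other systematic differences across studies.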
In the case of publication bias for statistical significance, a correlation will exist between the size of the effect and its standard error, SE_k, in Equation (4.2):

(4.2) Effect_k = b_1 + b_0 SE_k + u_k.

This equation removes that form of publication bias from the meta-estimate of the effect size, b_1. However, it still treats estimates equally regardless of their precision. The differences in estimates' precision show up as heteroskedasticity. A correction for that is to weight by the inverse of the standard error, which is equivalent to dividing the variables in Equation (4.2) by the standard error, SE_k:

(4.3a) Effect_k / SE_k = b_1 (1 / SE_k) + b_0 + v_k.
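The precision-weighted regression of Equation (4.3a) can be sketched as ordinary least squares on the transformed variables: the dependent variable Effect_k/SE_k (each study's t-statistic) regressed on 1/SE_k and a constant. A minimal illustration, again with hypothetical study values rather than results from the actual literature:

```python
import numpy as np

# Hypothetical estimated effects and their standard errors (illustrative only).
effects = np.array([-0.12, -0.05, 0.02, -0.08, -0.10, 0.01])
ses     = np.array([ 0.06,  0.03, 0.01,  0.05,  0.04, 0.02])

# Equation (4.2): Effect_k = b_1 + b_0*SE_k + u_k.
# Dividing through by SE_k gives Equation (4.3a):
#   Effect_k/SE_k = b_1*(1/SE_k) + b_0 + v_k,
# which corrects the heteroskedasticity by weighting precise studies
# more heavily.
y = effects / ses
X = np.column_stack([1.0 / ses, np.ones_like(ses)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
b1, b0 = coef
print("bias-corrected meta-estimate b_1:", b1)
print("publication-bias coefficient b_0:", b0)
```

Note that the roles of the coefficients swap under the transformation: b_1, the slope on 1/SE_k, is the precision-weighted meta-estimate of the effect, while b_0, now the intercept, captures the publication-bias component.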