
6
Dealing with the Known Unknowns
How Policymakers Should Deal with Dueling Estimates from Researchers

Thus far, this book's analysis of business incentives and early childhood education has ignored that these benefits are uncertain. Business incentive programs are estimated to increase the present value of state residents' earnings per dollar spent by $3.14. Early childhood programs are estimated to increase the present value of state residents' earnings per dollar spent by $2–$3, with specific dollars-and-cents figures given for each program. But these figures are best estimates, and they are surrounded by considerable uncertainty.

How much might uncertainty affect benefits? What are the sources of this uncertainty? How should this uncertainty affect our decisions about adopting these programs? How should uncertainty affect program design? This chapter addresses these questions. I conclude that despite uncertainty, we can move forward with needed program expansions, while designing programs to increase our understanding of what works.

SOURCES OF UNCERTAINTY

I read the research literature to say that preschool programs can probably make a marked improvement in the lives of disadvantaged children, but that we have only a partial idea of how they should be organized and managed, that is, brought to scale.
—Douglas Besharov (2007, p. 3)

My conclusion based on [my experience with studies focused on state fiscal policy] is that we are uncertain about the effects of economic development policies, including broad state fiscal policy, on economic growth.
—Therese McGuire (1992, p. 458)

We examine the results of some of the programs considered to be early education models—including Perry Preschool, Chicago Child-Parent Studies, Abecedarian, and Head Start—and find the research to be flawed and therefore of questionable value.
—Darcy Olsen and Lisa Snell (2006)

The upshot of all of this is that on the most basic question of all—whether incentives induce significant new investment or jobs—we simply do not know the answer. Since these programs probably cost state and local governments about $40–$50 billion a year, one would expect some clear and undisputed evidence of their success. This is not the case.
—Peter Fisher and Alan Peters (2004, p. 32)

As with most social science research, the research findings on early childhood programs and business incentives are viewed as "uncertain," "disputed," or "questionable" by some observers. The above quotations give some examples of such views.

There is indeed some uncertainty in the research on early childhood programs and business incentives. This uncertainty is sometimes used by critics to argue that the research is "flawed." For example, the above quotation by Olsen and Snell comes from a report by the libertarian Reason Foundation, in which they give many reasons why there might be uncertainty about the success of early childhood programs in different research studies. Although there is uncertainty in research results, its magnitude is sometimes exaggerated, and such uncertainty is inevitable in any social science research.

The uncertainty in the estimated economic development benefits of early childhood and business incentive programs has multiple sources, including the following:

• Small sample sizes in some studies
• Methodological differences across studies
• Problems in identifying causation
• Difficulty in observing long-term effects
• The use of local labor market models to infer labor market effects
• The complexity of defining "quality"
• Challenges in generalizing from studies and analyses to new and often broader programs

The small sample sizes of some studies of these programs make their estimates more uncertain.
Small sample size is particularly a problem for some (not all) studies of early childhood programs. Two of the best random assignment studies, of the Perry Preschool and Abecedarian programs, have low sample sizes. The Perry Preschool program had 58 treatment-group children and 65 control-group children. The Abecedarian program had 57 treatment-group children and 54 control-group children. Given samples this small, it is notable that these studies found any statistically significant effects at all; significance emerged only because some of the effects were large. The small sample sizes make these studies vulnerable to attack by critics. Critics such as Olsen and Snell can push hard on whether these studies "prove" that early childhood programs work. As another example, the Cato Institute has argued that many of Perry's effects "disappeared when the scientific standard [of statistical significance] was used" (Schaeffer 2008). (Although some of Perry's results are not statistically significant, many of its most...
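The claim that only large effects could reach statistical significance at these sample sizes can be made concrete with a standard power calculation. The sketch below (illustrative only; the function name and the normal-approximation formula are not from the book) computes the smallest true effect, in standard-deviation units, detectable with 80 percent power in a two-sample comparison of means at the Perry Preschool sample sizes given above:

```python
from statistics import NormalDist
from math import sqrt

def min_detectable_effect(n_treat, n_control, alpha=0.05, power=0.80):
    """Smallest true difference in means (in standard-deviation units)
    that a two-sample test can detect with the given power, using the
    usual normal-approximation formula."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value, two-sided test
    z_power = z.inv_cdf(power)          # z-score for the target power
    return (z_alpha + z_power) * sqrt(1 / n_treat + 1 / n_control)

# Perry Preschool sample sizes from the text: 58 treatment, 65 control.
mde = min_detectable_effect(58, 65)
print(f"{mde:.2f}")  # about 0.51 standard deviations
```

Under these assumptions, an effect of roughly half a standard deviation is needed before it reliably shows up as statistically significant, which is consistent with the observation that only the large effects in these studies cleared the significance bar.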