NOTES

INTRODUCTION

1. See Kaestle (1983), chapters 6 and 7, and Cremin and Borrowman (1956), especially Mann's "Tenth Report to the Massachusetts Legislature."

2. The term "administrative progressives" is somewhat awkward, since the practices that emerged during the Progressive Era (roughly 1890 to 1920) are no longer viewed as particularly progressive. In particular, progressive education, defined as the more Deweyan and constructivist pedagogical approaches, and the bureaucratic and efficiency-oriented reforms of the administrative progressives were quite antithetical to one another. The best source on the "administrative progressives" remains Tyack (1974); see also Callahan (1967).

3. Policy in many countries is driven by narratives, or widely accepted "stories," about why certain programs are worthwhile. The creation of such narratives typically takes a considerable period of time and many participants. Once widely accepted, policy narratives (like the "Education Gospel" or human capital, the fight against communism, or now the war on terrorism) are resistant to change, and subtle empirical evidence (the results that research can generate) is not usually enough to shake the hold of a policy narrative. See, for example, Roe (1994). Policy narratives are similar to the stories that outstanding leaders create (Gardner 1995), except that policy narratives are usually developed collectively rather than by individuals.

4. The rhetoric of the Education Gospel is also dominant in many other countries and international agencies, such as the European Union (EU) and the Organization for Economic Cooperation and Development (OECD); see Grubb and Lazerson (2006).

5. In the landscape of equity discussed in chapter 6, the Coleman Report's conception belongs in table 6.1, cell 11, where school resources are equal among groups defined by race, ethnicity, class, or income but not necessarily among individuals.

6. See Hanushek (1989, 1997) for the U.S. literature and Fuller and Clarke (1994) for the international literature. The effects of family background have been summarized by Sirin (2005), who finds income, parental education, and occupation to have similar-size effects. Unlike my discussion in chapter 4, her review is unconcerned with the causal mechanisms underlying these statistical findings.

7. One technical challenge to Hanushek's discouraging findings has been that a formal meta-analysis (rather than the counting exercise used by Hanushek) is more appropriate. Larry Hedges, Richard Laine, and Rob Greenwald (1994) found more positive results for expenditures per pupil and teacher experience, though the average effect sizes (.0014 and .07, respectively) are still distressingly small. Another challenge has been to come up with "one more study" and to rely on those few studies that do confirm a relation between resources and outcomes; many are reviewed in Verstegen (1998), and many draw on Project STAR in Tennessee. Unfortunately, the tactic of "one more study" leaves the uncertainty associated with older studies intact, unless the study is quite different from prior studies, which is the tactic of this book. For example, the frequent citations of the Tennessee experiments usually fail to mention an earlier random-assignment experiment in Toronto that had a greater range of class sizes, a more transparent randomization procedure, a richer variety of outcome measures, and a more lucid explanation of the results, but that failed to find effects of resources on five of six test scores (Shapson et al. 1980). The reason, according to researchers who observed classrooms, was that teachers failed to change their practices. The "battle of the experiments" is at best a draw, but at least it clarifies that understanding the effects of class size reduction requires entering the classroom to see what teachers do.

8. For some of the older studies, see Edmonds (1979) and Clark, Lotto, and Astuto (1984).
This approach remains attractive; some newer studies include Education Trust (Wilkins et al. 2006), American Institutes for Research (2006), and Timar and Kim (2008). The critiques of the effective schools approach include Purkey and Smith (1983), Rowan, Bossert, and Dwyer (1983), Cuban (1984), and Cohen (1983). For a rebuttal of the Education Trust study, see Harris (2006).

9. On the test score gap, see Jencks and Phillips (1998). I take this issue up in chapter 4.

10. On variation in the PISA reading scale, see Kirsch et al. (2002), tables 4.1 and 4.15. On the mathematical literacy scale, see Organization for Economic Cooperation and Development (OECD 2001), table 3.1. For IALS data, see OECD (2000), table 2.2. The data in figure I.1 come from IALS data on reading and math inequality and from...
