CHAPTER 3

Making the Grade (or Not): Success and Failure in NCLB's World

Gaps in school achievement, as measured, for example, in the eighth grade, have deep roots—deep in out of school experiences and deep in the structures of schools. Inequality is like an unwanted guest who comes early and stays late.
Paul E. Barton, Parsing the Achievement Gap

On a cool August day at the 2003 Minnesota State Fair, Republican governor Tim Pawlenty helped a South St. Paul fifth-grader named Jeremy look up his school's brand-new report card on the Internet. Calling the program Accountability on a Stick—in reference to the fact that people attending the Minnesota State Fair can find almost anything fried and stuck on a stick—Pawlenty praised the report cards and the positive effects that they would have on accountability for Minnesota's public schools:

Traditionally, the measure of our commitment to schools has always been just, "How much are we spending?" That's a good and important measure, but it's an incomplete measure. We also want the measurement to be, "What are we getting for the money? What are we getting in terms of student learning and performance and accountability?"1

The report card for Jeremy's elementary school included summaries of student test scores, demographics, teacher qualifications, and—as the centerpiece of the initiative—a rating of between one (worst) and five (best) stars. The foundation for these star ratings—fulfilling No Child Left Behind's requirement to publicize achievement—was a school's success or failure to make adequate yearly progress (AYP).2 In this, the system's first year, only elementary and combined elementary/middle schools were rated. Stars were assigned in both reading and math, based on the results of third- and fifth-grade achievement tests. For the ratings, schools were compared to other, similar schools based on size and the percentage of students qualifying for free or reduced-price lunch. The star ratings were normalized, meaning that only the top-performing schools within similar comparison groups could attain the highest ratings. Schools that failed to make AYP in any area could do no better than two stars, regardless of how well they did on the test results for any other grade or subgroup.

In this pilot year, Jeremy's South St. Paul school received three stars, as did the vast majority of Minnesota's elementary schools. This pattern held true the following year, when middle and high schools were incorporated into the ratings system. The result was largely by design, since high ratings were awarded on a competitive basis. "Schools," noted the president of the Minnesota teachers' union, "are graded on a curve, and therefore it would be statistically impossible for every school to perform at the top."3 Even in Minnesota, not all schools are allowed to be above average.

In many other ways, Jeremy's school was average as well. It enrolled roughly the same percentage of white students (84 percent) as Minnesota's public schools as a whole (81 percent), slightly fewer African American students, and slightly more Hispanic students. Just under a third of the students were eligible for free or reduced-price lunch, slightly higher than the state average. So why did Jeremy's school receive only an average rating?
His school might have represented a snapshot of average Minnesota, with its test scores reflecting Jeremy's peers and the level of community resources more than anything that the schools were or were not doing. It could also be that Jeremy's teachers and principals have been performing up to a decent standard, but nothing more. Or, it could be neither of these, or some of both. The ratings offer little guidance, especially for those schools that failed to make AYP and therefore received no more than two stars. Given the critical role played by student and community characteristics in educational production (as discussed in chapter 2), it is very difficult to extract the performance of any school from the sociodemographic characteristics of its student body. This is equally true in looking at test scores, success or failure to make AYP, or any list of "blue ribbon" schools based on the results of a state's standardized tests. This chapter begins the book's empirical analyses, offering a detailed look at the relationships among community characteristics...
