Brookings Papers on Education Policy 2005.1 (2005) 27-41

Comments

[Article by Tom Loveless]

Comment by Robert M. Costrell

Tom Loveless poses the striking conundrum that just as the social science research is beginning to indicate the promise of test-based accountability, a political backlash is threatening to stop the movement in its tracks or even reverse it. The first part of his paper, on the social science research, provides the promise, and the second part, on political prospects, documents the perils. The paper provides a good overview of where we are and much insightful analysis, if not, in the end, a lot of "hopeful signs."

My comments largely draw on my experience over the past five years as an academic in state government in Massachusetts, one of the few states that has successfully instituted high-stakes testing. Since June 2003, students have been required to pass the English and mathematics tests of the Massachusetts Comprehensive Assessment System (MCAS) in order to receive a diploma. The passing score is low, but the tests are rigorous. Consequently, a nontrivial number of students have been denied diplomas. The vast majority, however, have received diplomas that now mean something. The Cambridge and Brookline boycotts and the high-priced ads of the Massachusetts Teachers Association, of which Loveless writes, did not carry the day.

Effects on Achievement

Loveless provides a reasonable read of the literature on the effects of standards on achievement. Overall, standards-based reform seems promising, but as it is still early, data are too thin to be definitive. Only a few states have content-based graduation exams (as opposed to the old minimum-skill competency exams). Others have delayed or backed off in one way or another. So it is difficult as yet to gauge the effects of the most rigorous high-stakes testing using the usual standards of cross-sectional statistical research. If only a few states have high-stakes testing, it is hard to be sure that it is the testing regime, rather than other features of those states, that drives improved performance.

Still, in Massachusetts we are encouraged by what is at least a happy coincidence—although we have reason to believe it is more than that—between our high-stakes testing regime and the strong performance of our students, both in levels and improvement, on a variety of external tests, including the National Assessment of Educational Progress and Scholastic Assessment Test.69 We have also generated great improvement on the MCAS test itself, a set of exams that is widely respected (for example, by the evaluations of Achieve) and not easily gamed.

These results are consistent with a huge amount of qualitative intelligence on how the MCAS has changed practices on the ground, especially in the urban schools. There has been renewed focus on academic achievement in many concrete ways, at least in English and math, including double-block scheduling, increased writing assignments, greater emphasis on problem solving in math, and improved use of data to identify student weaknesses.70 Based on both the sense on the ground and the data, there is broad agreement among reformers in Massachusetts that although we are still far from achieving the goal of proficiency for all, high-stakes testing has been a key element in raising achievement. This is the view not only of those who had pressed for high stakes all along but also of some who were getting cold feet as the moment of truth approached. There is little doubt among the urban superintendents (some of the strongest proponents of standards-based reform) that the mobilization for such improvement could not have occurred with lesser forms of accountability, such as school report cards. Loveless reports that Margaret Raymond and Eric Hanushek find no statistical difference between states with school report card systems and those with stronger forms of school accountability, but their study did not examine systems of student accountability.71

Unintended Consequences

Much of the evidence that Loveless reviews on the effect of standards on dropout rates refers to an earlier generation of standards...
