Brookings Institution Press
Lawrence W. Sherman, "The Safe and Drug-Free Schools Program," Brookings Papers on Education Policy 2000 (2000): 125-156

The Safe and Drug-Free Schools Program

Lawrence W. Sherman

[Comment by Christina Hoff Sommers]
[Comment by Bruno V. Manno]

What is the best relationship between knowledge and democracy? The Calvinists who founded America thought democratic processes should set goals, while knowledge specialists should decide how to accomplish those goals. But in an increasingly antinomian (literally "against law") age, many Americans view knowledge itself as a democratic commodity, in which citizens are entitled not only to their own opinions, but also to their own facts. That is, every citizen is entitled to decide what knowledge means and what kind of evidence constitutes knowledge of cause-and-effect relationships. This view is manifest in the widespread resentment of elite knowledge as a basis for policymaking and the reverence for intuitive inspiration of the people "in the trenches" who must face problems on a daily basis. In the terms of Digby Baltzell, it is a matter of Puritanism versus Quakerism, of objective erudition versus the unschooled subjective inspiration of the "inner light." 1

This conflict lies at the core of the new federalism. Who knows best how to fix the nation's problems: the knowledge elites of the federal government in Washington or the grass-roots leaders of local government? Since the administration of Lyndon B. Johnson in the 1960s, elected officials of both parties have increasingly chosen the latter. As public confidence in Washington has plummeted, more power to spend federal funds has been passed down to state and local leaders. Substantial evidence exists that Americans have more confidence in their local governments than in Washington to fix public problems. 2 But scant evidence is available that local officials can achieve more success than federal officials. In coming years, the evidence on the question of achievement will begin to accumulate on a wide range of programs, from welfare reform to Medicaid.

One early result is found in the Safe and Drug-Free Schools program. As a crucial test of grass-roots control, the program delivers bad news. Since 1986, this program has given more than $6 billion to some fifteen thousand local school districts and fifty state governors to spend largely at their own discretion. No evidence shows that this half-billion-dollar-per-year program has made schools any safer or more drug-free. Meanwhile, much of the money has been wasted on performing magicians, fishing trips, and school concerts--and on methods (such as counseling) that research shows to be ineffective. Both the Office of Management and Budget and the Congressional Budget Office have tried to kill this program. Yet both Republican and Democratic presidents have joined with opposition parties in Congress to keep the program alive.

This paper explores the causes of, and alternatives to, the democratized waste of the Safe and Drug-Free Schools funding. The causes are linked to the politics of "symbolic pork," or the spending of money on problems without needing to show any outcome from previous spending. This paper documents that claim with respect to the Safe and Drug-Free Schools program and then considers alternative ways to restructure the program to increase its effectiveness. One alternative is a Food and Drug Administration (FDA)-style, Washington-driven program based on the best knowledge available nationwide. Another is a local accounting model, in which every community develops performance and results measures for every expenditure. A third alternative, which I define as "evidence-based government," combines the best national knowledge with the best local outcome measures in a participatory process of accountability for risk-adjusted, value-added results. 3

Whether a Washington-led program of research-based best practices for school safety could make schools any safer is hard to say. The idea of a federally approved menu of practices of proven effectiveness resembles the Food and Drug Administration's mandate to test all drugs for their safety and effectiveness before approval. Yet applying such a menu on a national scale presumes both the resources to support sufficient research and the generalizability of research results from the test communities to all or many other communities. Congress has never appropriated funds for the former, and many Americans refuse to believe the latter. Local leaders clearly prefer knowledge based on "our town" rather than on someone else's town, on the premise that every community is unique.

That premise suggests the local accounting model, in which each community invests in measurement of the impact of its federal expenditures. This approach is best exemplified by the "reinventing government" philosophy of the Government Performance and Results Act (GPRA) of 1993, which calls on all federal agencies to name their criteria for success and then report on how well they meet those criteria. 4 Under this model, school safety, or even achievement test scores, could be compared across different policies to find the most effective way to accomplish each goal. Trends in outcomes before and after the introduction of new policies may provide some clue as to their success. But this approach yields very weak evidence on cause and effect. Moreover, it disregards the huge differences in the level of risk of crime--and of academic failure--that are found from one school to the next, or one school district to another.

This paper proposes a model of evidence-based government that draws on both national and local evidence to compare schools with their expected performance outcomes, given the social context in which their students live. This is arguably the only fair way to compare outcomes across units of government and to show the "value-added" difference that each unit can make with its raw material. By comparing the difference schools make with their students, and not just the qualities students bring to school, federal programs can help reward the best practices that each school can undertake in its own context.
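As a rough illustration of the risk-adjusted, value-added comparison proposed here, one could regress each school's outcome on a measure of community risk and then rank schools by the gap between actual and expected performance. The sketch below is hypothetical: the school names, risk index, and incident rates are invented for illustration and are not drawn from any data in this paper.

```python
# Hypothetical sketch of risk-adjusted, value-added comparison:
# predict each school's violence rate from a community risk index,
# then score schools by (actual - expected). All data are invented.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x; returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# (community risk index, incidents per 1,000 students) -- hypothetical
schools = {"A": (0.2, 3.0), "B": (0.8, 12.0), "C": (0.8, 7.0), "D": (0.4, 6.0)}
xs = [v[0] for v in schools.values()]
ys = [v[1] for v in schools.values()]
a, b = fit_line(xs, ys)

# Value-added = actual minus expected; a negative score means the school
# is safer than its community context would predict.
value_added = {name: y - (a + b * x) for name, (x, y) in schools.items()}
for name, va in sorted(value_added.items(), key=lambda kv: kv[1]):
    print(f"School {name}: {va:+.1f} incidents per 1,000 vs. expectation")
```

In this invented example, schools B and C share the same community risk but differ in outcomes, so only C earns a negative (better-than-expected) score; comparing raw rates alone would obscure that difference.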

The Safe and Drug-Free Schools and Communities Act is part of a larger group of programs that arguably constitute symbolic pork. These programs differ from traditional pork barrel funds, which bring jobs or tangible benefits such as construction projects to one congressional district at a time. Symbolic pork puts money into every congressional district to symbolize federal concern about a problem, regardless of what effect the money has--or how small the amounts of money may be. Each member of Congress gets to say that he or she has voted for this program whenever constituents complain about crime or drugs in schools. By this logic, no member of Congress could ever vote against such funding, because that vote could imply indifference to problems of school safety and drug abuse. Nor can members easily vote to limit grass-roots control of the money, after a decade of predictable funding.

Such spending can even be "symbolic" rather than tangible in terms of the choice of problems it addresses. By choosing problems based on subjective concerns of the voters, instead of on the basis of objective knowledge, Congress may spend money on nonexistent or minor problems--government by anecdote as opposed to government by analysis. If voters were aroused by several blizzards, for example, despite evidence of declining snowfall nationwide, the passage of a national blizzard prevention program would constitute symbolic pork.

This analysis uses the metaphor of the Food and Drug Administration to argue for national evidence-based government. If there were an FDA for schools, what would it say about school violence and drugs? Guided solely by the best evidence available, what would one conclude about the severity and shape of the problems? Given the shape of the problems and the available policy evaluations, what would be the best policy? If an effective program were designed to be run from Washington based on the best national knowledge available, what would it look like? On what principles would the resources be allocated, how would what works be determined, and how would specific prevention methods be selected?

Problems and Solutions: Evidence-Based Analysis

Most schools are safe, although few are drug-free. The causes of violence and drug abuse are largely external to the schools themselves, although school management can make a moderate contribution to preventing those problems. Substantial research evidence suggests that putting the right kinds of programs into the small number of high-risk schools could succeed in making schools somewhat safer and more drug-free. While far more research is still needed, some old-favorite methods are known to be ineffective.

The rare problem of school violence is heavily concentrated in a small number of schools in urban poverty areas. Fixing the problem in urban schools would largely solve the nation's school violence problem. Mass murders have increased slightly in recent years--including the rare occurrences in nonurban schools--but overall rates of violent injury of high school students have remained virtually unchanged since 1985.

Drug abuse is more widespread than violence, but only moderately linked to schools. Most students who use drugs do so off school property. Schools are more commonly used for the exchange of drugs than for their consumption. Marijuana use by high school seniors has fallen and risen since 1985, cocaine use has fallen, and hallucinogen use has risen.

The causes of these problems are mostly beyond school walls, and schools at best can have moderate effects on them. But good evidence exists about how those effects can be achieved.

The Problems

While the problems of violence and drug abuse are different in their geography, neither of them has changed much in the past decade.

violence in schools. On average, American schools are among the safest places on Earth. While the number of mass murder incidents nationwide rose from two in 1992-93 to six in 1997-98, the overall murder rate has always been far lower in schools than in environments outside schools. In 1992-94, the murder rate for children in schools was less than 0.45 per 100,000 person-years. 5 The overall U.S. homicide rate in those years was about 9 per 100,000, or twenty times higher than the rate in schools. 6 For children outside of school, the murder rate was more than 20 per 100,000. 7 Thus American children are, on average, forty-four times more likely to be murdered out of school than they are in school. Moreover, they are far safer sitting in American schools than they are living in low-homicide countries such as Australia, England, and New Zealand. 8
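The ratios in the preceding paragraph follow directly from the cited rates; a quick arithmetic check, using only the figures quoted in the text:

```python
# Murder rates per 100,000 person-years, as cited in the text.
in_school = 0.45      # children in school (upper-bound estimate, 1992-94)
overall_us = 9.0      # overall U.S. homicide rate in those years
out_of_school = 20.0  # children outside of school

print(overall_us / in_school)     # national rate vs. in-school rate: 20x
print(out_of_school / in_school)  # out-of-school vs. in-school: ~44x
```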

Yet not all children are created equal in their risk of being murdered, either in school or out. School violence, like serious violence in general, is heavily concentrated in highly segregated neighborhoods where most adults are out of the labor force. 9 Homicide rates in some urban neighborhoods reach 180 per 100,000, or twenty times the national homicide rate and almost four hundred times the national risk of murder in school. 10 Fully 90 percent of all 109,000 schools nationwide report not a single serious violent incident in a year. But 17 percent of schools in cities report at least one incident, compared with 11 percent of schools on the urban fringe, 8 percent of rural schools, and 5 percent of small-town schools. 11

High schools and middle schools carry most of the risk of violence. In 1996-97, 21 percent of high schools and 19 percent of middle schools reported at least one serious violent event; only 4 percent of elementary schools did. This difference also tracks the age structure of serious violence outside of school.

Schools are more dangerous for teachers than for students. While students are victimized by serious violent acts at the rate of about 10 per 1,000 per year, teachers face more than twice that rate. Teachers in urban schools are victimized at the rate of 39 incidents per 1,000, about double the rates of 20 per 1,000 for teachers in suburban schools and 22 per 1,000 in rural schools. 12

The rate of violence against students in schools has remained remarkably constant over the past fifteen years, despite the national doubling in the overall juvenile homicide rate during that time. 13 That conclusion is evident in the proportion of high school seniors who reported to the annual University of Michigan survey that in the past twelve months, while they were "at school (inside or outside or on a schoolbus)," they had been injured with a weapon "like a knife, gun or club." While table 1 shows the disproportionate concentration of injuries among black seniors (which would be even greater absent the higher dropout rate among inner-city students), it also shows virtually no change in rates of violence since the program started in 1987.

These rates of injury may seem high to most adults. But they are comparable to national rates. In 1995 the national rate of victimization by all violent crime was 5 incidents per 100 people. For crimes of violence with injury, the national rate for all ages was 1 incident per 100, but for persons aged sixteen to nineteen it was 3.3 per 100. 14 The shape of the violence problem in general is that it is heavily concentrated among young men and has been for centuries--both in and out of school. 15

Public perceptions of the school violence problem may be driven less by these rates than by anecdotal evidence. The national concern over mass murders in schools clearly increases the perception of all schools as dangerous, at-risk environments. But from a policy perspective, the shape of the mass murder problem is a needle in a haystack. Predicting where such incidents will occur is virtually impossible, despite the tendency to have 20/20 hindsight about the predictability of each event after it has happened. 16 From a political perspective, extreme cases reaffirm the need for a program to fix the problem of unsafe schools, regardless of how safe they are in any objective sense.

drug use in schools. Drug use in schools appears to be more prevalent and more widely distributed than violent crime but is still limited to a small fraction of all students. More than 91 percent of all high school students, and some two-thirds of current users of marijuana, say they do not use marijuana on school property. More common is acquiring drugs on school property. For 1995, the Centers for Disease Control Youth Risk Behavior Surveillance System (YRBSS) reports that 32 percent of all high school students have been offered, given, or sold an illegal drug on school property. This figure varies somewhat by race: 31.7 percent for whites, 28.5 percent for blacks, and 40.7 percent for Hispanics. 17

These data do not provide separate estimates for inner-city schools, so the shape of the drug problem cannot be directly compared with the shape of violence. However, the analysis of where children are at risk for these problems, and whether schools are above or below the average risk for any location in their communities, can be replicated. While some 42 percent of students claimed to have used marijuana at least once in their lifetime, and 25 percent report current use, only 8.8 percent report current marijuana use (any time in the last thirty days) on school property. This latter figure also varies little by race, at 7 percent for whites and around 12 percent for blacks and Hispanics.

Overall the data suggest that schools may be largely drug-free even when their students are not. The data on drug transfers suggest that schools may be more a marketplace for drugs than a place for their consumption. That might still be a damning indictment if students could get drugs only at school. The widespread availability of drug markets outside school suggests that drug-free schools might never create drug-free students. Even so, availability of drugs at school does make some difference. Controlling for individual propensity to use drugs, individual decisions to use drugs increase when more students in a school say that drugs are easy to buy there. 18

Surveys of high school seniors conducted each year since 1985 show how little their drug use during the preceding twelve months has changed since the advent of the program in 1987 (see table 2).

The Causes

The causes of youth violence and drug abuse in schools have only a modest connection to how schools are run. The fact that most youth violence occurs outside schools suggests that schools do a good job of protecting students against violence for seven hours a day. The best predictor of the safety of a school is the safety of its neighborhood. 19 Once the effect of neighborhood violence rates is controlled, little (although some) variability remains in the safety of each school. Only some of that variability can be explained by how the schools are run. Smaller schools are safer than larger schools. Schools with a sense of community and strong administrative leadership are safer than schools that lack these characteristics. It may be easier to create a sense of community in smaller schools, but size is only one factor in school climate.

The pessimistic view of the high correlation between community problems and school problems is compositional: The composition, or kinds of students each school has, determines its level of violence and drug abuse. Much like the conclusions of research on family factors in determining educational achievement, this view says that family and background factors of students shape the school safety climate and overwhelm good educators. 20 This argument concludes that it is futile to modify schools if the community is the prime source of school problems; modifying communities and their families would work far better and would naturally improve the schools.

Good evidence against that view comes from Gary and Denise Gottfredson's 1985 analysis of 1976 crime data from more than six hundred secondary schools. 21 This study measured characteristics of communities, students, and schools. Schools were measured by interviews with students, teachers, and principals. Their analysis shows that community structural characteristics (such as rates of unemployment and single-parent households) and student compositional characteristics (such as the number of parents in each student's home) were so highly correlated that they could not be separately estimated. Even after controlling for these characteristics, however, school climate still varied--and had a clear effect on rates of victimization in schools. For junior high schools, community factors explained 54 percent of the variance in victimization of teachers, but school factors explained an additional 12 percent. Community factors explained only 5 percent of the variance in junior high school victimization of students, while school effects explained 19 percent. Thus depending on the measure, school effects can be even greater than compositional or community effects on junior high crime rates.

School effects are somewhat weaker for senior highs, but still important. Community factors explain 43 percent of the variance in teacher victimization rates, while school factors explain an additional 18 percent of the variance. For student victimization rates, community factors explain 21 percent of the variance, while school effects explain another 6 percent of the variance.

If school characteristics matter, which ones affect rates of crime the most? The Gottfredsons found that three general factors may cause less crime: size and resources, governance, and student socialization. Specifically, schools have less teacher victimization, independent of community context, when they have

--More teaching resources,

--Smaller total enrollment (junior highs) or smaller student-teacher ratios (senior highs),

--More consistent and fair discipline,

--Less democratic teacher attitudes toward parent and student control (junior highs only),

--Less punitive teacher attitudes,

--More teacher-principal cooperation (senior highs only), and

--Higher student expectations that rules will be enforced (junior highs), and commitment to conventional rules (senior highs).

The Gottfredsons found similar factors also affect rates of student victimization, especially perceptions of the fairness and consistency of school discipline. 22

Despite the independent effect of school factors on crime, school management is highly correlated with community characteristics. The most disorganized schools are found in the most disorganized communities. Does this mean that schools cannot be improved to reduce crime? No. But it does reflect the size of the challenge facing any policy trying to produce that result. That challenge can be met more effectively on the basis of experimental and quasi-experimental research that has compared a wide range of different strategies for enhancing school capacity to prevent crime and drug abuse.

The Solutions

How should the United States spend $500 million per year to foster safe and drug-free schools? Note that the question is not how much, if any, money to spend on this objective. Evidence-based government could help rank the relative importance of different issues, and even help allocate resources among them. Yet those decisions are increasingly the result of poll-driven politics, which is just another technology in the long history of democracy. 23 Criminologists might argue for redirecting the money to a general reconstruction of community, family, housing, and labor markets in small areas of the fifty-four cities producing more than half of all homicides in America. 24 But that option, for now, is off the table. If the money is inevitably to be spent on schools, the best evidence can still be used to design the best program. Doing that requires matching resources to risks, learning what works, and crafting policy from evidence.

matching resources to risks. The evidence shows highly uneven risks of violence, with most school violence found in a small percentage of schools. The evidence is less clear about the concentration of drug abuse. Thus it may make sense to split the efforts for controlling drugs and violence. This requires some criterion for weighing the relative importance of the two problems. One criterion is cost to the taxpayers. An estimated $20,000 in medical costs results from each nonfatal gun injury, most of which is borne by taxpayers. The number of drug-related auto accidents or violent crimes is much harder to estimate. But a 50-50 split between the two problems is probably as good an estimate as any.

Where should this country spend $250 million annually to foster safer schools? The evidence suggests the money should be put where the crime is, concentrating most of the funds in the schools with most of the violence, generally located in urban poverty areas. This strategy is made easier by the relative lack of resources in many of the most dangerous schools. The evidence on how to allocate the funds would have to be gathered carefully, to ensure that schools do not increase crime reporting just to get more money. Police records on neighborhood crime rates might be a better source of data.

Where should this country spend $250 million annually to foster drug-free schools? The evidence suggests that this objective requires far broader distribution of funds than for violence prevention. Nonetheless, an ample literature exists on the inequitable support for education across school districts. If federal funds are to make the most difference, a question still arises of whether all school districts should be funded equally per student, or on some measure of risk given each school's constraints. The "old" federalism would require extensive paperwork to demonstrate each school's need and a comprehensive proposal for federal officials to review. But that is what grass-roots solutions reject. Using measures of drug use, either the National Institute on Drug Abuse (NIDA) or the President's Office of National Drug Control Policy (drug czar) could assign a risk level to every one of the fifteen thousand school districts in the nation and create three levels of risk: high, medium, and low. Then, for example, 60 percent of the funds could be assigned to the high-risk districts, 30 percent to the medium-risk districts, and 10 percent to the low-risk districts.
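The tiered allocation suggested above is mechanical once districts are assigned to risk levels. In the sketch below, only the 60-30-10 split and the $250 million total come from the text; the number of districts in each tier is a hypothetical assumption.

```python
# Hypothetical allocation of $250 million across drug-use risk tiers
# (60/30/10 split from the text), divided evenly among the districts
# assigned to each tier. District counts per tier are invented.
TOTAL = 250_000_000
shares = {"high": 0.60, "medium": 0.30, "low": 0.10}
districts = {"high": 1_500, "medium": 4_500, "low": 9_000}  # hypothetical

per_district = {
    tier: shares[tier] * TOTAL / districts[tier] for tier in shares
}
for tier, amount in per_district.items():
    print(f"{tier}-risk districts: ${amount:,.0f} each")
```

Under these invented counts, a high-risk district would receive roughly thirty-six times as much as a low-risk district, which is the point of risk-based allocation: the money follows the problem rather than the census.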

In a political process, resources can rarely be matched to risks, especially when the need is greatest among those with the least political power. But whatever principle is used to allocate resources across schools, the next question is how to spend the money in each school. Grass-roots political theory says each school or district should make that decision, without Washington telling it what to do. Evidence-based government says whoever makes the decision should do it based on good evidence. But doing that requires a clear definition of terms for learning what works: what constitutes good evidence.

learning what works. Three schools of thought about evaluation research have emerged in recent years: the mainstream evaluation community, program advocates who reject the legitimacy of external evaluation, and antinomian critics of the scientific method. Each group has its own view about how to learn about what works. Yet all three agree that once what works has been determined, more of it should be done.

Mainstream evaluators continue to believe that good science and reliable measurement can reveal more about cause and effect than the opinions of people delivering the programs. This group, which includes me, continues to press for randomized field trials, multisite replication, testing and refinement of microprocesses, and theory-based programs. Many in this group would prefer a combination of qualitative process evaluations and controlled impact analysis, although they are often accused of caring only for the latter. Their view of how to scale up from pilot to national program is cautious, with a preference for an incremental process of testing at each successively larger scale.

Program advocates learn what works from personal experience. They make things happen with remarkable success, overcoming obstacles that might restrain the growth of their programs. Evaluation is one such obstacle. They would not work so hard for their programs if they had any doubt as to those programs' benefits. That viewpoint inevitably makes evaluation at best a distraction, and at worst a threat. Advocates often ask elected officials to observe their programs firsthand, talk to staff and clients, hear the testimonials, and feel the enthusiasm. That method of evaluation, to them, is a far more reliable indicator of success than whatever statistics might show, because statistics can show anything. Both this viewpoint and the evaluators' have been around for decades, and both are predictable. 25

The newest school of thought may be called the antinomian critics of the scientific method. Lisbeth B. Schorr is an articulate exponent of this viewpoint, which stresses the difficulty of placing comprehensive, flexible programs into a controlled test. 26 The basic argument is that variability is essential to program success, but inimical to controlled testing. Therefore, controlled tests should be abandoned in favor of less rigorous research designs. Low internal validity designs are the only research possible for the kind of multitreatment, comprehensive, one-size-does-not-fit-all interventions that are needed. Tom Loveless has applied this perspective to education policy research, arguing against anyone trying to define "best practices" based on research results. He also stresses the responsiveness of each teacher to each student, arguing against research-based policy, which by definition constrains that virtue. 27

From the perspective of mainstream evaluation, the antinomian view confuses the limitations of inadequate research funding with inadequate methods of science. The primary reason that variability within treatment groups is problematic for evaluation is limited research funding. With larger sample sizes, more support for consultation with practitioners, and other resources, the scientific method can use controlled tests of many variations and combinations of strategies. What evaluators call "Solomon" designs, with ten or twenty different treatment groups in a randomized comparison, can be taken out of the laboratory and put into field tests, given enough money and time to enlist the partnership and commitment of teachers. With 14,000 police agencies, 15,000 school districts, 109,000 schools, and more than 1,000,000 classrooms, more than enough cases are available for analysis. Even in big cities, where the number of governments is smaller, the number of contact points remains enormous.

Evidence-based government takes its inspiration (and its name) from evidence-based medicine. 28 That nascent field faces similar debates between evaluators, doctors, and antinomian critics of randomized trials. Yet medicine persists in seeking elegant simplicity for clarification of evidence, with a five-point scale of the strength of each study supporting each choice of medical treatment. 29 Similarly, the University of Maryland's Department of Criminology and Criminal Justice recently employed a five-point scale to rank the strength of evidence from each evaluation of crime prevention practices. 30 This scale was employed in a congressionally mandated review of the effectiveness of the $4 billion in state and local crime prevention assistance administered by the U.S. Department of Justice. The law required that the review "employ rigorous and scientifically recognized standards and methodologies." 31

Following that mandate, the Maryland report defined its scientific methods scale as follows:

Level 1: Correlation between a crime prevention program and a measure of crime or crime risk factors at a particular time.

Level 2: Temporal sequence between the program and the crime or risk outcome clearly observed, or a "comparison" group present without demonstrated comparability to the treatment group.

Level 3: Before-after comparison between two or more units of analysis, one with and one without the program.

Level 4: Before-after comparison between multiple units with and without the program, controlling for other factors, or with a nonequivalent comparison group that shows only minor differences.

Level 5: Random assignment and analysis of comparable units to program and comparison groups.

Using this scale, the report then classified all crime prevention programs (defined as local methods, not federal funding "streams") for which sufficient evidence was available. The categories were what works, what doesn't work, and what's promising. Any program that did not meet the following standards was left in the residual category of what's unknown.

What works. These programs are reasonably certain to prevent crime or reduce risk factors for crime in the kinds of social contexts in which they have been evaluated, and for which the findings should be generalizable to similar settings in other places and times. Programs coded as "working" by this definition must have at least two level 3 evaluations with statistical significance tests and the preponderance of all available evidence showing effectiveness.

What doesn't work. These programs are reasonably certain to fail to prevent crime or reduce risk factors for crime, using the identical scientific criteria used for deciding what works.

What's promising. These are programs for which the level of certainty from available evidence is too low to support generalizable conclusions, but for which some empirical basis exists for predicting that further research could support such conclusions. Programs are coded as "promising" if they were found effective in at least one level 3 evaluation and in the preponderance of the remaining available evidence.

What's unknown. Any program not classified in one of the three above categories is defined as having unknown effects.
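Read together, the methods scale and the four categories reduce to a mechanical decision rule. The sketch below is purely illustrative; the function name, the (level, effective) encoding of each study, and the handling of evidentiary ties are assumptions of mine, not the Maryland report's.

```python
def classify(evaluations):
    """Classify a prevention program using the Maryland report's rules.

    Each evaluation is a (level, effective) pair: `level` is the 1-5
    scientific methods scale score, `effective` is True if the study
    found a statistically significant preventive effect.
    """
    strong = [e for e in evaluations if e[0] >= 3]
    strong_hits = sum(1 for level, effective in strong if effective)
    strong_misses = len(strong) - strong_hits
    # Preponderance of all available evidence, strong or weak.
    hits = sum(1 for _, effective in evaluations if effective)
    preponderance = hits > len(evaluations) / 2

    if strong_hits >= 2 and preponderance:
        return "what works"
    if strong_misses >= 2 and not preponderance:
        return "what doesn't work"
    if strong_hits >= 1 and preponderance:
        return "what's promising"
    return "what's unknown"
```

Note how the residual category falls out automatically: a program with only level 1 or 2 evidence can never leave "what's unknown," no matter how favorable that weak evidence looks.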

The weakest aspect of this classification system is that no standard means is available for determining which variations in program content and setting might affect generalizability. In the current state of science, that can be accomplished only by the accumulation of many tests in many settings with all major variations on the program theme. None of the programs reviewed for the Maryland report had accumulated such a body of knowledge. The conclusions about what works and what doesn't work should therefore be read as more certain to the extent that the conditions of the field tests can be replicated in other settings. The greater the differences between evaluated programs and other programs using the same name, the less certain or generalizable the conclusions of any report must be.

In her chapter of the Maryland report, Denise C. Gottfredson reviewed available evidence on the programs designed to reduce violence and drug use in schools. Her work was not an evaluation of the Safe and Drug-Free Schools program, but its results can be summarized as a basis for an evidence-based program to accomplish those goals.

what works in prevention. Given the research on the causes of violence and drug problems at school, it is not surprising that the most effective programs treat the whole school rather than merely supplementing the curriculum. Building on social organization theory, these programs have taken the holistic approach that all aspects of school life can affect violence and substance abuse. Whether school starts on time, for example, can affect student perceptions that discipline is fair and consistent, which in turn can affect the level of crime and drug abuse. The specific conclusions Gottfredson reached follow.

What works.

--Building school capacity to initiate and sustain innovation through the use of school "teams" or other organizational development strategies works to reduce delinquency and is promising for reducing substance abuse. 32 [End Page 139]

--Clarifying and communicating norms about behavior through rules, reinforcement of positive behavior, and schoolwide initiatives (such as antibullying campaigns) reduce crime and delinquency and substance abuse. 33

--Social competency skills curricula, such as Life Skills Training (LST), which teach over a long period of time such skills as stress management, problem solving, self-control, and emotional intelligence, reduce delinquency and substance abuse. 34

--Training or coaching in "thinking" skills for high-risk youth using behavior modification techniques or rewards and punishments reduces substance abuse. 35

What doesn't work. This list includes some of the most popular attempts at prevention that have been developed and promoted by strong advocates. These are in widespread use in schools, both with and without federal funding. They are based on what appear to their advocates to be reasonable theories and produce strong anecdotal evidence. But they all fail to show prevention effects in at least two studies at the Maryland scale level 3 or higher:

--Counseling and peer counseling of students fail to reduce substance abuse or delinquency, and can even increase delinquency. 36

--Drug Abuse Resistance Education (D.A.R.E.), a curriculum taught by uniformed police officers primarily to fifth and sixth graders in more than seventeen lessons, has virtually no effect on prevention of drug abuse. 37 Available evaluations are limited to the original D.A.R.E. curriculum, which was modified slightly in 1993 and again in 1998, now extending from K-12 in Los Angeles.

--Instructional programs focusing on information dissemination, fear arousal, moral appeal, self-esteem, and affective education generally fail to reduce substance abuse. 38

--Alternative activities and school-based leisure time enrichment programs, including supervised homework, self-esteem exercises, community service, and field trips, fail to reduce delinquency risk factors or drug abuse. 39

What's promising. The following programs have only one level 3 or higher study showing that they work, but no studies of that strength showing that they do not work:

--"Schools within schools" programs (such as Student Training through Urban Strategies or STATUS) that group students into smaller [End Page 140] units for more supportive interaction or flexibility in instruction have reduced drug abuse and delinquency. 40

--Training or coaching in "thinking" skills for high-risk youth using behavior modification techniques or rewards and punishments may reduce delinquency and are known to work to reduce substance abuse. 41

CRAFTING POLICY FROM EVIDENCE. Three decades ago under the old federalism, a highly trained civil servant in Washington might have taken this list and offered funding to schools that could propose plausible plans for replicating one or more of the programs that work. Each proposal would have been carefully reviewed, and perhaps regional federal officials might even have visited each site. If the program was not implemented as planned, some attempt might have been made to cut off the funds, but an appeal to a member of Congress might have stopped that quickly.

Under the new federalism, the law essentially limits civil servants in Washington to writing a check and enclosing with it a manual of recommended programs. The premise is that no one in Washington is close enough to local conditions to decide what kinds of programs are most appropriate for any given locale. While that may be true, proximity alone may not lead to the right answer. Local officials may have more information, but they may also be more susceptible to the enthusiasm of advocates selling what has proven to be snake oil.

Antinomian critics of the list of what works and what doesn't work will cite the uncertainty about generalizability of the results. So do the evaluators. Gilbert Botvin, the inventor of Life Skills Training--the most effective (but by no means the most widely used) drug prevention curriculum--examined the variability in the quality of implementation after teacher training. He found that the percentage of curricular materials covered in the classroom varied widely from school to school, from 27 percent to 97 percent, with an average of 68 percent. Only 75 percent of the students were taught at least 60 percent of the required content. Most important, the level of implementation directly affected results. When less than 60 percent of the program elements are taught, the program fails to prevent drug abuse. 42
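Botvin's fidelity finding can be restated as simple arithmetic: what matters is the mean curriculum coverage and the share of classrooms at or above the 60 percent threshold below which the program fails. A hypothetical sketch follows; the coverage figures in the test are invented for illustration and are not Botvin's data.

```python
# Threshold below which, per Botvin's finding, LST fails to prevent
# drug abuse. The summary function and its inputs are hypothetical.
THRESHOLD = 0.60

def fidelity_summary(coverages):
    """Return (mean coverage, share of classrooms meeting threshold)
    for a list of per-classroom curriculum-coverage fractions."""
    mean = sum(coverages) / len(coverages)
    share_ok = sum(1 for c in coverages if c >= THRESHOLD) / len(coverages)
    return mean, share_ok
```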

This "flexibility" of committed teachers is what the antinomians wish to preserve. Lisbeth Schorr, for example, objects to a McDonald's restaurant kind of formula for ensuring consistency across programs--largely on the empirically testable grounds that it cannot be delivered, but also on the grounds that theory-based flexibility will work better. Botvin's evidence [End Page 141] partly falsifies the latter claim but may support the former claim. It is not clear that the means are yet available to ensure proper implementation of the programs that work, even if funding could be limited to such proven programs.

That the means to ensure fidelity are now lacking does not mean they cannot be developed. Just as research can show what works and what doesn't work to prevent crime, it can also learn what works in program implementation. Here again, the limits of the scientific method could be confused with the current limits in funding. With adequate investment in the research and development (R&D) effort to learn how to implement effective programs, evidence-based teaching and evidence-based school leadership could be fostered in ways that reduce violence and drug abuse.

Had $6 billion been spent on such an R&D effort over the past twelve years, an effective means of encouraging grass-roots adoption of effective practices might have been developed by now. Instead, the $6 billion was given to local officials to spend any way they wanted. The results are not encouraging.

Symbolic Pork: The Return on Investment

The Safe and Drug-Free Schools program is based on two key principles. One is that everyone should get an equal share of money per student, regardless of need. The other is that interference from Washington should be minimal. Ironically, federal officials are the first to be blamed for any local program failures. A Los Angeles Times exposé in 1998 documented such failures extensively. Yet in his 1999 State of the Union address, President Bill Clinton received bipartisan applause when he called for continuing the program.

The Shape of the Legislation

The Safe and Drug-Free Schools and Communities Act, first enacted in 1986, was most recently reauthorized in 1994 under Title IV of the Elementary and Secondary Education Act. 43 The law divides the available funds on the basis of the number of students in each state. It gives 20 percent of each state's funds to governors to award as grants. The 80 percent balance of the money is allocated to each school district on the basis of enrollment. [End Page 142]

The result of this formula is to spread the money thinly across the 14,881 school districts in the country, most of which participate in the program. Six out of every ten school districts receive $10,000 or less each year. Small districts may receive only $200 or $300, which does not even cover paperwork processing time. The Greenpoint Elementary School District in Humboldt County, California, received $53 in 1997 for its twenty students. 44

Large school districts, in contrast, can spend the money on substantial administrative costs. The Los Angeles Unified School District received $8 million from the program in 1997 for its 660,000 students. It spent $2.2 million--28 percent--of the funds on the program's administration, including a $1,000 bonus to teachers who serve as program coordinators at each school.
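The statutory formula described above is mechanical enough to state directly: 20 percent of a state's funds go to the governor's discretionary grants, and the remainder is divided among districts in proportion to enrollment. The following sketch illustrates why the money spreads so thinly; the district names and dollar amounts in the example are hypothetical.

```python
def allocate(state_funds, district_enrollments):
    """Split a state's allotment per the SDFSCA formula: 20 percent to
    the governor, 80 percent to districts in proportion to enrollment.

    district_enrollments maps district name -> student count.
    """
    governor_share = state_funds * 0.20
    district_pool = state_funds - governor_share
    total_students = sum(district_enrollments.values())
    district_grants = {
        name: district_pool * n / total_students
        for name, n in district_enrollments.items()
    }
    return governor_share, district_grants
```

At roughly $10 to $12 per student under this formula, a district of twenty students receives a grant too small to cover its own paperwork, exactly as the Greenpoint example shows.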

School districts are also authorized to spend up to 20 percent of their funds for security measures, such as metal detectors or security guards. Schools in communities as safe as State College, Pennsylvania, have followed this suggestion in recent years, assigning police or guards to patrol the schools. So did Columbine High School in Littleton, Colorado, although to no effect in preventing a mass murder.

Administrative Rulemaking

In response to the March 1997 University of Maryland report on preventing crime, the U.S. Department of Education in July 1997 proposed revised guidelines to make the program more evidence-based. The proposed rules tried to limit the funding to activities for which some research showed effectiveness. These guidelines originally proposed that each state or school district "design and implement its programs for youth based on research or evaluation that provides evidence that the programs used prevent or reduce drug use, violence, or disruptive behavior among youth." 45 Yet, in the politics of the new federalism, even this proved too tough a standard to impose.

In June 1998, the department summarized the comments received on the proposed principles and published its final "nonregulatory guidance for implementing SDFSCA [Safe and Drug-Free Schools and Communities Act] principles of effectiveness." The comments indicate a strong grass-roots reaction against an attempt to invoke evidence-based government. The final rules show the compromises the Department of Education [End Page 143] made to preserve the symbolism of evidence-based government without much reality: 46

Comments: Several commentators noted the lack of research-based programs in drug and violence prevention that meet local needs. One of those commentators stated that the high standard imposed by the SDFS [Safe and Drug-Free Schools] Principles of Effectiveness would create a "cartel" or monopoly since very few programs can meet the standard established.

Discussion: While a significant body of research about effective programs that prevent youth drug use and violence exists, even more needs to be done to identify a broader group of programs and practices that respond to varied needs.

Changes: Based on these concerns, the Secretary has modified the language accompanying this principle. These modifications broaden the scope of the term "research-based" approach to include programs that show promise of being effective in preventing or reducing drug use or violence.

Comment: One commentator expressed concern that implementation of the SDFS Principles of Effectiveness may force rural LEAs [local educational agencies] to replace "old favorite" programs that they feel have been working for them with prevention programs that have been proven to work in other socio-economic areas--such as high-population LEAs--but may not be appropriate to their needs.

Discussion: The Department plans to provide technical assistance to help LEAs obtain information about effective, research-based programs appropriate for an LEA's demographics. The purpose of the SDFS Principles of Effectiveness is to ensure that funds available to grantees under the SDFSCA are used in the most effective way. This allows LEAs to continue "old favorite" programs if they are effective or show promise of effectiveness.

The language of "promise" in the revised guidance raises the basic question of how "research-based programs" are defined. "Promise" is not defined the same way as in the Maryland report, with at least one level 3 impact evaluation showing a positive result. No definition of "research-based programs" is found anywhere in either the Principles of Effectiveness (which have the force of administrative rules) or the accompanying "Nonregulatory Guidance." In the final published version supplementing the Federal Register announcement, one section discussed (but did not define) the meaning of programs that show promise of being effective:

Recipients that choose this approach should carefully examine the program they plan to implement to determine if it holds promise of success. Does [End Page 144] it share common components or elements with programs that have been demonstrated to be successful? Is the program clearly based on accepted research? Is there preliminary data or other information that suggest that the program shows promise of effectiveness?

If recipients decide to implement a promising program, at the end of no more than two years of implementation they must be prepared to demonstrate to the entity providing their grant that the program has been effective in preventing or reducing drug use, violence, or disruptive behavior, or in modifying behaviors or attitudes demonstrated to be precursors of drug use or violence.

This section is followed by a questions-and-answers section on how to evaluate programs, which provides this further detail:

Q53. What does "evaluate" mean?

A53. Evaluation is the systematic collection and analysis of data needed to make decisions. Periodically, recipients will need to examine the programs being implemented to determine if they are meeting established measurable goals and objectives. The nature and extent of such evaluation activities will vary, and should be selected after considering the methods that are appropriate and feasible to measure success of a particular intervention....

Q55. Must evaluation efforts include a control group?

A55. No, recipients are not required to establish a control group.

Thus a close reading of these rules suggests that research-based or promising programs can mean anything that recipients say they mean. Expressed in terms of the Maryland scale, evaluations need not be any higher than level 1 or 2. Because no clear requirement exists for an outcome measure, some recipients might even interpret this language to allow goals to be defined in terms of inputs alone--so many students attending D.A.R.E. classes, for example. By failing to define the meaning of "research-based," the department continued the basic policy of letting recipients spend the money without regard to results. But it is not clear that Congress or the White House would have allowed the department to push much further if the grass-roots protests had become loud.

In the final language of the explanatory comments accompanying the principle of research-based programs, the department tried to please both the antinomians and the mainstream evaluators simultaneously by urging each school district to conduct its own extensive review of the scientific literature: [End Page 145]

While the Secretary recognizes the importance of flexibility in addressing State and local needs, the Secretary believes that the implementation of research-based programs will significantly enhance the effectiveness of programs supported with SDFSCA funds. In selecting effective programs most responsive to their needs, grantees are encouraged to review the breadth of available research and evaluation literature, and to replicate these programs in a manner consistent with their original design.

How the Money Has Been Spent

Given the legislation and rules, the resulting expenditures were predictable. Los Angeles Times reporter Ralph Frammolino spent months learning what the Department of Education has no system for knowing: how the local education authority recipients spent the money. While neither a systematic audit nor a social scientist's coding of different categories of spending, Frammolino's research provides a level of detail that supports his basic conclusion: "Left to thrash about for any strategy that works, local officials scatter federal money in all directions and on unrelated expenses." 47

Frammolino found many examples of schools spending money on entertainment that was, in theory, supposed to inspire students to stay drug-free. The theoretical basis of that claim is not clear in his examples:

--Several months before the March 1998 murders in Jonesboro, Arkansas, the school used program funds to hire a magician.

--One Washington-based magician gives two hundred performances annually with a drug awareness theme, of which some 25 percent are paid for with program funds. The $500 show lasts forty-five minutes, during which "we might cut a girl in half and talk about drugs damaging a body."

--The 1997 Miss Louisiana gives antidrug talks paid for by program funds, in which she sings the theme song from "Titanic" and Elvis Presley's "If I Can Dream."

--A school district outside Sacramento paid $400 for a speaker who described the life of Dylan Thomas and his death from alcohol.

Other program funds are spent on the "alternative activities" that Denise Gottfredson's review found ineffective. In Los Angeles, more than $15,000 was spent on tickets to Dodgers games and $850 was spent for Disneyland passes. In Eureka, Utah, officials spent $1,000 on fishing equipment for field trips in which students go fishing with a health [End Page 146] teacher. The teacher said he thought students might learn to prefer fishing to drinking and trying drugs. In Virginia Beach, program funds paid for lifeguards and dunking booths for drug-free graduation activities.

Many dollars are spent in aid of classroom instruction, although the connection to violence and drug prevention is unclear. Hammond, Louisiana, police spent $6,500 in program funds to buy a remote-controlled, three-foot replica police car toy. In Michigan, a state audit found $1.5 million spent on full models of the human torso, $81,000 for large sets of plastic teeth and toothbrushes, and $18,500 for recordings of the "Hokey Pokey." These aids were used to teach sex education, toothbrushing, and self-esteem, respectively.

Enormous sums are spent on publications. More than half of the $8 million in Los Angeles went to buy books, including $3.3 million in character education books published by a small firm specializing in books reimbursed by the program. The books provide second to fifth graders with "lessons in character." These books are part of a program that is supposed to be taught for a minimum of twenty-four forty-minute sessions spread across the school year, teaching "pillars of character: respect, responsibility, fairness, and trustworthiness." Another $900,000 was paid for substitutes to replace 2,354 teachers who spent a day attending seminars on how to inculcate character in elementary schools, or to lead discussion groups with older "at-risk" students.

Frammolino reports that "student assistance groups" take about half the national budget for the program. In Los Angeles, 141 schools run 2,450 groups of high-risk students led by teachers given five days of training and a script. The students are pulled out of one class period per month to discuss their personal problems. They are not exposed to Life Skills Training, the research-based curriculum that San Diego school officials obtained $1 million in federal funds to implement--one of the few large districts known to have done so. 48

Estimating Return on Investment

For the program funds spent on counseling, the return on investment is reliably estimated to be zero. Mark Lipsey's 1992 meta-analysis of juvenile delinquency interventions found counseling near the bottom in effectiveness, with an average effect size of -.01. Gary Gottfredson's 1987 level 3 evaluation of peer counseling groups similar to the [End Page 147] Student Assistance Program groups in Los Angeles found that they increased delinquency slightly, instead of reducing it. 49

Whatever portion of program funds is spent on alternative leisure activities is also producing zero return for prevention. The evidence shows either no effects from such programs or increases in delinquency from mixing high-risk and low-risk youth in the absence of strong pro-social norms. These "old favorites" may appear effective to the teachers who lead them, but they can hardly tell what effect the programs have on delinquency when the teachers are not around. One program of such activities led by a street gang social worker kept offending rates high until the program's funding ran out. When the activities ended, the gang's cohesion declined, and so did its members' offending rates. 50

The return from hiring magicians, singers, speakers, and other inspirational performers is unknown. It seems reasonable to dismiss these programs as a waste of money, if only because no plausible theory or indirect evidence suggests that these activities might prevent violence or drug abuse.

Estimating the return on investment in character education is a harder task. James Q. Wilson wrote after the Littleton murders that American schools were designed primarily for character education, a mission they have lost in recent years. He suggested that, if schools got serious about this task, they might be able to foster a climate less conducive to violence. However, whether mere books and lectures are enough to make character education the central mission of a school is not clear. The research on whole school organizational strategies may indicate the extent of the changes required. Nonetheless, both the character curricula and Wilson's larger hypothesis remain unevaluated. Both could be fruitful topics for increased research on school-based prevention.

The most profitable investment in the program's portfolio may be the Life Skills Training curriculum. To the extent that the curriculum is fully taught, it could be achieving up to 66 percent reductions in substance abuse. 51 Yet it is unknown how much program funding is allocated to LST. Far more money appears to be spent on the police-taught D.A.R.E. program, which has shown meaningful prevention effects in none of its evaluations to date. (The reason Botvin's LST program is much less widely used than D.A.R.E. is that he lacks an advocacy organization comparable to D.A.R.E.'s national corporation; Botvin's program is also taught by teachers, who have much less visual and telegenic appeal than uniformed police officers in classrooms.) [End Page 148]

It is difficult to demonstrate much if any clear prevention benefit from the half-billion dollars or more per year in federal funding for the program. Even in the face of congressional concern, the program remains alive because of its political benefit. As Representative George Miller (D-Calif.) told Frammolino, "Every elected official wants these programs in their district. Once you succumb to that pressure, you're just dealing with a political program. You're not dealing with drug prevention or violence prevention." Or as Representative Peter Hoekstra (R-Mich.) observed, "Most of the numbers on Safe and Drug-Free Schools will tell you that the federal program has failed miserably."

Marrying Grass Roots with Evidence

Can this program be saved? Can it do something useful, rather than squandering tax dollars indefinitely? Answering that question requires a more general perspective on the new federalism and evidence-based accountability. It also requires acknowledging the limitations of divided government, with different parties in control of the executive and legislative branches. As long as government is divided, Congress has no incentive to grant strong powers to the executive branch to improve results. Success by the administration may be good for the country, but bad for party politics. This was true when the program started in 1986 with a Democratic Congress and a Republican president and remains true in 1999 with a Republican Congress and a Democratic president.

Many have argued that the program should simply be eliminated. That response is not useful, given the strong political forces keeping the program alive. In the wake of the 1999 school murders, the program is less likely than ever to be eliminated, no matter how much evidence may support that conclusion. The only useful question is how the program might be modified, within the political constraints that Congress and the president perceive, to make schools even safer and more drug-free than they already are.

There are at least four possible approaches to modifying the program within its political constraints. One is the FDA model of federally approved programs. A second is the agricultural extension agent model for applying national knowledge. A third is the Government Performance and Results Act model for local accounting indicators. But only the fourth [End Page 149] alternative, evidence-based government, seems likely to combine knowledge and democracy for good results.

The FDA Model

One alternative is more detailed legislation on programs eligible for federal funds. Congress may not want the administration deciding what programs are eligible, which could work to the administration's political advantage. But Congress could conduct its own review of the literature or delegate that task to the National Research Council of the National Academy of Sciences. That review could develop detailed blueprints for a selection of prevention methods that research has found to be effective. Congress could use the resulting report to enact a legislated list of eligible prevention methods, just as the Food and Drug Administration certifies drugs found to be safe and effective for public distribution. Congress could also develop a list of methods found to be ineffective, which could be barred from federal funding. Room for innovations could be created by reserving 20 percent of funding for previously untested programs, along with requirements that the funded innovations be rigorously evaluated (level 3 or higher) by an independent research organization or university.

The major problem with this approach is that the available research lacks legitimacy across a wide spectrum of grass-roots leadership. The most commonly heard example of this problem is the statement that "D.A.R.E. may not work in the places it was studied, but we know it is working in our local schools." Even if one accepts all the available research that supports lists of effective and ineffective programs, those lists are very short. Much of what the program spends money on has never been evaluated, at least not in the precise form that each locality employs. The lack of knowledge about locally popular programs further reduces the legitimacy of the FDA approach at the grass-roots level.

The Agricultural Extension Agent Model

A more democratic approach would simply put the available knowledge into the hands of grass-roots leaders, using the agricultural extension agent model. Since 1919, the U.S. Department of Agriculture has shared with the states the cost of hiring university employees to provide evidence-based farming advice directly to farmers. Congress does not [End Page 150] have to review any literature to do this. It merely pays for an ongoing flow of data between universities and farms and back again. This partnership has helped make America the breadbasket of the world. The subject matter of farming may not be as contentious as that of running schools, but controversy may not be the key variable. In agriculture and education alike, the key to success may be a close personal relationship between a university extension agent and a local decisionmaker.

If school districts could rely on educational extension agents to provide them with free technical assistance on how to spend their program funds, that might move the schools voluntarily toward adopting the same list of proven programs that an FDA model might develop. The problem with Washington bureaucrats, or even researchers in other cities, is that they are faceless and impersonal. The virtue of extension agents giving advice is that they become well-known personalities, long-term colleagues in the same community. Even medical doctors ignore research evidence when it comes to them in the form of publications and resent its bureaucratic imposition in the form of managed care reimbursement rules. As a RAND study discovered, doctors usually change their practices only when another doctor they know and respect persuades them to--not when they read a new study in the New England Journal of Medicine. 52

Commentaries on the extension agent model have suggested that the agents would not necessarily stick to the evidence. Some commentators indict schools of education as often indifferent to research evidence and prone to pushing the latest fads. Social integration of extension agents with the grass-roots leadership could put pressure on the agents to find research that "justifies" the decisions local leaders make, rather than objectively informing those decisions. And without a basis for increasing the availability of strong evaluation research, these agents of applied research would have too little research to apply.

The GPRA Model

A third alternative is to require that each school receiving federal funds account for the results of those funds using the federal Government Performance and Results Act standards. This model assumes that localities cannot do their own controlled field tests of prevention programs, but they can at least document trends over time in crime and drug abuse associated with those programs. Given the secret nature of much crime and delinquency, [End Page 151] this would require schools to administer annual student surveys of self-reported victimization, offending, and drug abuse. Such instruments are readily available and would require only local competence in the administration and interpretation of such surveys. Properly employed, these surveys could identify schools that showed greater or lesser success over time in preventing these problems, just as a certified public accountant's statement shows how profit levels change over time in publicly held corporations.

The main limitation of this model is resources. As a recent RAND study suggests, local school leaders are unable to add such evaluations to their job descriptions. They already have far too many duties to start performing survey research. Their interpretation of the Safe and Drug-Free Schools and Communities program is that it asks them to provide evaluations without paying for the work. 53 This problem is summarized elsewhere as "There's no such thing as free evidence." 54 Even more important, however, is the scientific limitation of the GPRA local accounting model. Without explicit field tests employing control groups, the causal link between programs and trends in "outcomes" will remain tenuous. And even with a local evaluation staff assigned to such accounting tasks on a full-time basis, the common failure to adjust for student background characteristics can make trend analysis a poor indicator of how successful schools are with the cards they are dealt.

The importance of controlling for student background characteristics cannot be overestimated. It is shocking that the national standards reform movement has failed to identify this issue, thus allowing schools serving wealthy communities to look more successful than schools serving poorer communities. A strong link exists between the social capital of the community and the overall success of the school. To account for the value that schools add to their students beyond what students acquire at home, all performance measures should control for the expected rates of student failure or success. This is as true of drug abuse as it is of Scholastic Assessment Test (SAT) scores. Parental educational levels, parental employment rates and income, prevalence of two-parent homes, and other factors can be measured in student surveys. But proper statistical controls for these background factors require competent data analysts in each state and large school district, people whose full-time job is the collection and interpretation of valid local accounting data. Merely requiring schools to produce GPRA-style trend data comes nowhere near meeting these needs. [End Page 152]
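The background adjustment described above can be sketched in a few lines. This is a hedged illustration only: the single covariate, the five schools, and all rates below are invented, and a real analysis would use multiple covariates measured from student surveys.

```python
# A sketch of background adjustment; all numbers are invented.
# Each school's observed drug-abuse rate (from a student survey) is
# compared with the rate predicted by one background covariate,
# the share of students from two-parent homes.
two_parent = [0.85, 0.70, 0.55, 0.40, 0.30]   # covariate, by school
abuse_rate = [0.06, 0.09, 0.12, 0.16, 0.17]   # observed outcome

n = len(two_parent)
mean_x = sum(two_parent) / n
mean_y = sum(abuse_rate) / n

# Ordinary least squares for one covariate.
slope = (sum((x - mean_x) * (y - mean_y)
             for x, y in zip(two_parent, abuse_rate))
         / sum((x - mean_x) ** 2 for x in two_parent))
intercept = mean_y - slope * mean_x

# "Value added" = predicted rate minus observed rate, so a positive
# number means fewer problems than background alone would predict.
value_added = [intercept + slope * x - y
               for x, y in zip(two_parent, abuse_rate)]
```

A school serving a poor community can show positive value added even if its raw abuse rate is higher than a wealthy school's, which is exactly the distinction the raw GPRA trend data cannot make.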

Evidence-Based Democracy

The fourth, and arguably best, approach to reforming the Safe and Drug-Free Schools and Communities program is to combine national knowledge, risk-adjusted local GPRA accounting, and grass-roots democracy. This approach starts with a participatory planning process of using data to hold programs accountable, one that includes representatives of schoolteachers, administrators, parents, school board members, students, and local taxpayers. Such a group can be called a planning committee, an oversight group, or a results task force. It could be operated at the margins of a school system, or it could be integrated into the ongoing supervision of the schools. It could meet in private, or it could hold annual public sessions in which each school principal is asked to account for the data assembled by the school district's own performance data analyst. Such high-visibility sessions could be as successful as the New York City Police "compstat" process or hospital "grand rounds" in focusing the organization on its outcomes, whether the outcomes measure crime and drug abuse or standardized achievement test scores. 55

No matter how each local education authority chooses to use the data, the key element of this approach is to put the right kind of data in its hands. These data should include both the latest, most complete results of national research and highly refined local trend data. Such data should look not only at measures of crime and drug abuse, but also at measures of program implementation and fidelity. School climate measures from annual surveys of students and teachers would be a critical component of the local accounting data, given the strong relationship in the literature between school organizational climate and all school-specific results. The measurement of educational practices allows local conclusions to be drawn about the causes and effects of different programs and practices. The measurement of school-specific--and possibly even teacher-specific or class-specific--results net of student background characteristics helps identify true success or failure. And this process has proven successful at the hospital level in diagnosing high failure rates and improving results. 56

The use of these data would therefore constitute an iterative process of the kind proposed by W. Edwards Deming. Figure 1 shows how the process would draw on both national and local evidence on the relationship between practices and outcomes. The model is as applicable to drug [End Page 153] abuse rates as it is to SAT scores. Its success depends largely on the quality of the data, and the quality of the leadership using the data to improve performance.

The major limitation of this approach is cost. The approach may not be feasible in the majority of school districts that are too small to support a performance data analyst, although state agencies could provide local schools with a time-shared performance analysis service. Even in the larger districts, the approach requires new data collection and new knowledge rarely found in-house. It is striking that the U.S. Department of Education would put into print the suggestion that "grantees are encouraged to review the breadth of available research and evaluation literature, and to replicate these programs in a manner consistent with their original design." It is unclear how many local education authorities have staff with the time and the training needed to do all that properly. However, Ph.D.-level "performance accountants" could devote their entire careers to reviewing and explaining the results of ongoing school-based prevention and performance program evaluations. School-based prevention research is a highly specialized field that few university professors are equipped to discuss. Only this kind of infrastructure can provide the requisite level of expertise in comprehending, communicating, and applying the research.

If Congress wants to see local prevention programs based on sound knowledge, perhaps the best way to achieve that goal is to pay for it. Changing the program to earmark an additional 10 percent of funds for performance data accounting would put some $50 million into this function [End Page 154] nationwide. But without such an appropriation, this approach is not likely to be implemented.

Implications for Reinventing Government and GPRA

As long as the Education Department is prohibited by law from "exercising any direction, supervision, or control over the curriculum or program of instruction of any school or school system," it makes little sense to hold the department accountable for results. 57 That is one example of the major flaw in GPRA in particular and in federal accountability in general in an era of grass-roots control. But the department could be empowered to measure results across local education authorities and use those measures for further pressure to adopt evidence-based government. That measurement could also be part of an amended Safe and Drug-Free Schools program. A university extension service associated with each school district could be charged with collecting standardized data from local police or other sources, all of which would be sent to Washington for analysis.

The method of outcome assessment is an important issue for making GPRA work. Perhaps the best way to ensure that performance measures are adjusted for student background characteristics is to require each state to produce a ranking of schools or school systems on standardized, background-controlled results. This ranking could allow schools in high-poverty areas to show more value-added results than schools in wealthy suburban areas. Aggregating such measures by state and nation could even allow the federal Safe and Drug-Free Schools and Communities program to produce its own meaningful GPRA indicators for the nation's 14,881 school districts. The annual standing of each district (probably within category of urban, suburban, and rural schools) could become a point of pride in fostering an evidence-based culture of results guiding democracy at the grass roots.
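The within-category ranking described above can be sketched as follows. The district names, categories, and background-adjusted scores are invented for illustration; in practice the scores would come from the state's standardized, background-controlled results.

```python
from collections import defaultdict

# Invented sample of background-adjusted district scores, where a
# higher score means more value added than background predicts.
districts = [
    ("District A", "urban", 0.021),
    ("District B", "suburban", -0.004),
    ("District C", "urban", -0.010),
    ("District D", "rural", 0.008),
    ("District E", "suburban", 0.015),
]

# Rank districts within each category (urban, suburban, rural),
# best background-adjusted result first.
by_category = defaultdict(list)
for name, category, score in districts:
    by_category[category].append((name, score))

rankings = {cat: sorted(rows, key=lambda r: r[1], reverse=True)
            for cat, rows in by_category.items()}
```

Ranking only within category keeps an urban district from being compared directly with a wealthy suburb, which is the point of the background adjustment.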

Such rankings could also become a source of cheating. The more important evidence becomes, the more likely it is to be subverted. Thus, not surprisingly, a Texas school official was recently indicted for falsifying school test scores. That is one more reason to have a performance analyst symbolically clothed in the mantle of accounting, rather than science. Studies as such can always be discounted as irrelevant, but profit-and-loss statements carry overtones of serious business. [End Page 155]

Conclusion

Other ways may exist to make the Safe and Drug-Free Schools program less wasteful and more useful. No matter how it is done, the fundamental challenge remains the same: marrying grass-roots federalism to a culture of evidence-based government. This marriage may be one of the few ways to overcome symbolic pork. It provides no solution to the mismatching of resources and risk. But it does begin to do what President Clinton proposed for federal education funding in his 1999 State of the Union address--to "change the way we invest that money, to support what works and to stop supporting what doesn't." 58

Notes

The comments of William Galston, Ramon Cortines, David Kirp, Diane Ravitch, and Michelle Cahill helped shape this paper's final form. The development of the paper was supported by my colleagues in the Crime Prevention Effectiveness Program at the University of Maryland, Department of Criminology and Criminal Justice, including Michael Buckley, Denise Gottfredson, Doris MacKenzie, John Eck, Shawn Bushway, Peter Reuter, and Mary West, as well as by the program's patrons, Jerry Lee, Robert Byers, George Pine, and a private foundation.

1. Digby Baltzell, Puritan Boston and Quaker Philadelphia (New York: Free Press, 1979).

2. Gary Orren, "Fall from Grace: The Public's Loss of Faith in Government," in Joseph S. Nye Jr., Philip D. Zelikow, and David C. King, eds., Why People Don't Trust Government (Harvard University Press, 1997), pp. 81, 83.

3. Lawrence W. Sherman, Evidence-Based Policing, Second Ideas in American Policing Lecture (Washington: Police Foundation, 1998).

4. David Osborne and Ted Gaebler, Reinventing Government: How the Entrepreneurial Spirit Is Transforming the Public Sector (Penguin, 1993).

5. During those two years, sixty-three students aged five through nineteen were murdered. The denominator for these murders was about 50 million students in school for about 6 hours per day (after adjusting for absenteeism) for about 200 days per year × 2 years = 2,400 hours per student. Because one full person-year = 365 days × 24 hours = 8,760 hours, each student represented an estimated 0.274 person-years (2,400/8,760 = 0.274) × 50 million = 13,698,630 person-years. The rate of murder was therefore 0.45 per 100,000 person-years. This overstates the rate by about 50 percent, because almost all murders are committed against people who are awake at the time, and the person-year calculation assumes that people never go to sleep. If people sleep about one-third of each day, the corrected rate of homicide is only 0.30 per 100,000. These calculations were derived from raw data in Annual Report on School Safety, 1998 (Department of Justice and Department of Education, 1999), p. 9.

6. Federal Bureau of Investigation, Crime in the United States: The Uniform Crime Report, 1992, 1993, 1994 (Department of Justice).

7. The obverse of the calculation of the school-hours denominator is that the 50 million students spent 0.726 of their time (1 - 0.274 = 0.726) = 36,300,000 person-years out of school. During those years, (7,357 - 63 =) 7,294 children aged five to nineteen were murdered out of school, for a rate of 20.09 per 100,000. Annual Report on School Safety, p. 9.
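The person-year arithmetic in notes 5 and 7 can be checked directly. This sketch uses only the figures quoted in the notes (63 in-school and 7,294 out-of-school murders, 50 million students, 6 hours per day for 200 days per year over two years); the small differences from the notes' printed rates reflect rounding in the original.

```python
# Recompute the in-school and out-of-school homicide rates
# from the figures in notes 5 and 7.
HOURS_PER_YEAR = 365 * 24            # 8,760 hours in one person-year
school_hours = 6 * 200 * 2           # 2,400 hours per student over two years
in_school_share = school_hours / HOURS_PER_YEAR   # ~0.274

students = 50_000_000
py_in = in_school_share * students          # ~13.7 million person-years
py_out = (1 - in_school_share) * students   # ~36.3 million person-years

rate_in = 63 / py_in * 100_000       # ~0.46 per 100,000 person-years
rate_out = 7_294 / py_out * 100_000  # ~20.09 per 100,000 person-years
```

The out-of-school rate is thus roughly forty times the in-school rate, which is the comparison the notes are built to support.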

8. Heather Strang, Homicide in Australia, 1990-91 (Canberra, Australia: Australian Institute of Criminology, 1993).

9. See William J. Wilson, When Work Disappears (Alfred A. Knopf, 1996); and Douglas S. Massey and Nancy A. Denton, American Apartheid: Segregation and the Making of the Underclass (Harvard University Press, 1993).

10. Lawrence W. Sherman and Dennis P. Rogan, "Effects of Gun Seizures on Gun Violence: Hot Spots Patrol in Kansas City," Justice Quarterly, vol. 12 (1995), pp. 673-93, especially p. 679.

11. Annual Report on School Safety, p. 11.

12. Annual Report on School Safety, p. 10.

13. Federal Bureau of Investigation, Crime in the United States: The Uniform Crime Reports (Department of Justice, annual).

14. Kathleen Maguire and Ann Pastore, eds., Sourcebook of Criminal Justice Statistics, 1997 (Department of Justice, 1998), pp. 178-81.

15. James Q. Wilson and Richard Herrnstein, Crime and Human Nature (Simon and Schuster, 1985).

16. Sheryl Gay Stolberg, "By the Numbers: Science Looks at Littleton, and Shrugs," New York Times, May 9, 1999, section 4, p. 1.

17. Maguire and Pastore, Sourcebook of Criminal Justice Statistics, p. 234.

18. Gary D. Gottfredson, Exploration of Adolescent Drug Involvement: Report to the National Institute of Juvenile Justice and Delinquency, U.S. Department of Justice, grant 87-JN-CX-0015 (Johns Hopkins University, Center for the Social Organization of Schools, 1988).

19. Denise Gottfredson, Schools and Delinquency (Cambridge University Press, forthcoming), chapter 3.

20. Denise C. Gottfredson, testimony before the Senate Committee on Health, Education, Labor, and Pensions, May 6, 1999.

21. Gary D. Gottfredson and Denise C. Gottfredson, Victimization in Schools (New York: Plenum, 1985).

22. Gottfredson, Schools and Delinquency, chapter 3.

23. Robert Dahl, On Democracy (Yale University Press, 1998).

24. Lawrence W. Sherman and others, Preventing Crime: What Works, What Doesn't, What's Promising (Department of Justice, 1997), at www.preventingcrime.org.

25. Carol H. Weiss, Evaluation Research: Methods of Assessing Program Effectiveness (Englewood Cliffs, N.J.: Prentice-Hall, 1972).

26. Lisbeth B. Schorr, Common Purpose: Strengthening Families and Neighborhoods to Rebuild America (New York: Anchor Books, 1997).

27. Tom Loveless, "The Use and Misuse of Research in Educational Reform," in Diane Ravitch, ed., Brookings Papers on Education Policy, 1998 (Brookings, 1998), pp. 279-317.

28. Sherman, Evidence-Based Policing.

29. Michael Millenson, Demanding Medical Excellence (University of Chicago Press, 1997).

30. Sherman and others, Preventing Crime.

31. H. Rept. 104-378, 104th Cong., 1 sess., Section 116.

32. Denise C. Gottfredson, "An Empirical Test of School-Based Environmental and Individual Interventions to Reduce the Risk of Delinquent Behavior," Criminology, vol. 24 (1986), pp. 705-31; Denise C. Gottfredson, "An Evaluation of an Organization Development Approach to Reducing School Disorder," Evaluation Review, vol. 11 (1987), pp. 739-63; and D. J. Kenney and T. S. Wilson, "Reducing Fear in the Schools: Managing Conflicts through Student Problem Solving," Education and Urban Society, vol. 28 (1996), pp. 436-55.

33. G. R. Mayer and others, "Preventing School Vandalism and Improving Discipline: A Three-Year Study," Journal of Applied Behavior Analysis, vol. 16 (1983), pp. 355-69; D. Olweus, "Bully/Victim Problems among Schoolchildren: Basic Facts and Effects of a School Based Intervention Program," in D. J. Pepler and K. H. Rubin, eds., The Development and Treatment of Childhood Aggression (Hillsdale, N.J.: Lawrence Erlbaum Publishers, 1991); and D. T. Olweus, "Bullying among Schoolchildren: Intervention and Prevention," in R. D. Peters, R. J. McMahon, and V. L. Quinsey, eds., Aggression and Violence throughout the Life Span (Newbury Park, Calif.: Sage, 1992).

34. G. J. Botvin and others, "A Cognitive Behavioral Approach to Substance Abuse Prevention," Addictive Behaviors, vol. 9 (1984), pp. 137-47; R. Weissberg and M. Z. Caplan, "Promoting Social Competence and Preventing Anti-Social Behavior in Young Adolescents," University of Illinois, Department of Psychology, 1994; Institute of Medicine, Reducing Risks for Mental Disorders: Frontiers for Preventive Intervention Research (Washington: National Academy Press, 1994); and W. B. Hansen and J. W. Graham, "Preventing Alcohol, Marijuana, and Cigarette Use among Adolescents: Peer Pressure Resistance Training versus Establishing Conservative Norms," Preventive Medicine, vol. 20 (1991), pp. 414-30.

35. J. E. Lochman and others, "Treatment and Generalization Effects of Cognitive-Behavioral and Goal-Setting Interventions with Aggressive Boys," Journal of Consulting and Clinical Psychology, vol. 52 (1984), pp. 915-16; B. H. Bry, "Reducing the Incidence of Adolescent Problems through Preventive Intervention: One- and Five-Year Follow-Up," American Journal of Community Psychology, vol. 10 (1982), pp. 265-76; and Mark W. Lipsey, "Juvenile Delinquency Treatment: A Meta-Analytic Inquiry into the Variability of Effects," in T. Cook and others, eds., Meta-Analysis for Explanation: A Casebook (New York: Russell Sage Foundation, 1992).

36. Gottfredson, "An Empirical Test of School-Based Environmental and Individual Interventions to Reduce the Risk of Delinquent Behavior"; and Gary Gottfredson, "Peer Group Interventions to Reduce the Risk of Delinquent Behavior: A Selective Review and a New Evaluation," Criminology, vol. 25 (1987), pp. 671-714.

37. C. Ringwalt and others, Past and Future Directions of the D.A.R.E. Program: An Evaluation Review, Draft Final Report, award 91-DD-CX-K053 (Washington: National Institute of Justice, 1994); D. P. Rosenbaum and others, "Cops in the Classroom: A Longitudinal Evaluation of Drug Abuse Resistance Education (DARE)," Journal of Research in Crime and Delinquency, vol. 31 (1994), pp. 3-31; and R. R. Clayton, A. M. Cattarello, and B. M. Johnstone, "The Effectiveness of Drug Abuse Resistance Education (Project DARE): Five-Year Follow-Up Results," Preventive Medicine, vol. 25 (1996), pp. 307-18.

38. G. Botvin, "Substance Abuse Prevention: Theory, Practice, and Effectiveness," in M. Tonry and J. Q. Wilson, eds., Crime and Justice, vol. 13: Drugs and Crime (University of Chicago Press, 1990).

39. Botvin and others, "A Cognitive Behavioral Approach to Substance Abuse Prevention"; W. B. Hansen, "School-Based Substance Abuse Prevention: A Review of the State of the Art Curriculum 1980-1990," Health Education Research, vol. 7 (1992), pp. 403-30; J. G. Ross and others, "The Effectiveness of an After-School Program for Primary Grade Latchkey Students on Precursors of Substance Abuse," Journal of Community Psychology, OSAP special issue (1992), pp. 22-38; M. Stoil, G. Hill, and P. J. Brounstein, "The Seven Core Strategies for ATOD Prevention: Findings of the National Structure Evaluation of What Is Working Well Where," paper presented at the Twelfth Annual Meeting of the American Public Health Association, Washington, D.C., 1994; and An Evaluation of a School-Based Community Service Program: The Effects of Magic Me, technical report, available from Gottfredson Associates Inc., Ellicott City, Md.

40. Denise C. Gottfredson, "Changing School Structures to Benefit High-Risk Youth," in P. E. Leone, ed., Understanding Troubled and Troubling Youth (Newbury Park, Calif.: Sage Publications, 1990).

41. B. H. Bry, "Reducing the Incidence of Adolescent Problems through Preventive Intervention."

42. Botvin, "Substance Abuse Prevention"; and Gottfredson, testimony before Senate Committee on Health, Education, Labor, and Pensions.

43. Sections 4111-4116, 20 U.S.C. 7111-7116.

44. Ralph Frammolino, "Failing Grade for Safe Schools Plan," Los Angeles Times, September 6, 1998, p. 1.

45. Federal Register, July 16, 1997 (62 FR 38072).

46. Federal Register, vol. 63, no. 104, June 1, 1998, p. 29902.

47. Frammolino, "Failing Grade for Safe Schools Plan."

48. Botvin, "Substance Abuse Prevention."

49. Denise C. Gottfredson, "School-Based Crime Prevention," in Lawrence W. Sherman and others, Preventing Crime: What Works, What Doesn't, What's Promising (Department of Justice, 1997), pp. 5-46, 47, at www.preventingcrime.org.

50. Malcolm Klein, Street Gangs and Street Gang Workers (Englewood Cliffs, N.J.: Prentice-Hall, 1971).

51. Frammolino, "Failing Grade for Safe Schools Plan."

52. As cited in Millenson, Demanding Medical Excellence.

53. Melissa A. Bradley, Michael Timpane, and Peter Reuter, "Focus Groups: Safe and Drug-Free Schools Program Conference Draft," RAND Corporation Aspen Institute Wye River Conference on Prevention of Drug Abuse and Violence among School Children, July 21-23, 1999.

54. Lawrence W. Sherman, "Ten Principles of Evidence-Based Government," University of Pennsylvania, Fels Center of Government, 1999.

55. Sherman, Evidence-Based Policing.

56. Millenson, Demanding Medical Excellence.

57. Department of Education, Safe and Drug-Free Schools and Communities Act, State Grants for Drug and Violence Prevention Program, Nonregulatory Guidance for Implementing SDFSCA Principles of Effectiveness, May 1998.

58. New York Times, January 20, 1999, p. A22, Washington edition.

59. Nance F. Sizer and Theodore Sizer, eds., Moral Education: Five Lectures (Harvard University Press, 1970), p. 5.

60. Matthew Rees, "Title IV: Neither Safe Nor Drug-Free," in Marci Kanstoroom and Chester E. Finn Jr., eds., New Directions: Federal Education Policy in the Twenty-First Century (Washington: Thomas B. Fordham Foundation, 1999), p. 83.

61. Rees, "Title IV," p. 79.

62. Carlotta C. Joyner, Safe and Drug Free Schools: Balancing Accountability with State and Local Flexibility (General Accounting Office, 1997).

63. Richard W. Riley, "Safe in Our Schools," Washington Post, August 15, 1999, p. B7. See also Phillip Kaufman and others, "Indicators of School Crime and Safety, 1998," Education Statistics Quarterly, vol. 1, no. 1 (Spring 1999), pp. 42-45; and Edward Walsh, "Fewer Students Were Expelled for Firearms, U.S. Reports," Washington Post, August 11, 1999, p. A3.

64. Similar conclusions are presented in Paul E. Barton and Richard J. Coley, Order in the Classroom: Violence, Discipline, and Student Achievement (Princeton, N.J.: Educational Testing Service, 1998). They used data from the Department of Education and the Department of Justice.

65. Barton and Coley, in Order in the Classroom, said, "The strongest predictor of school crime was the nature of the surrounding community." Quoted matter on p. 8; see also pp. 11-19.

66. For an excellent summary of this line of inquiry, see Stewart C. Purkey and Marshall S. Smith, "Effective Schools: A Review," Elementary School Journal, vol. 83 (March 1983), pp. 427-52.

67. Paul Hill and Mary Beth Celio, Fixing Urban Schools (Brookings, 1998), pp. 31-38.

68. Maris A. Vinovskis, "Improving Federal Educational Research, Development, and Evaluation," testimony presented at a Joint Hearing before the House Committee on Education and the Workforce and the Senate Committee on Health, Education, Labor, and Pensions, June 17, 1999, p. 5.

69. Maris A. Vinovskis, Changing Federal Strategies for Supporting Educational Research, Development, and Statistics (Department of Education, 1998), p. 70.

70. Vinovskis, Changing Federal Strategies for Supporting Educational Research, Development, and Statistics, p. 69. See also Vinovskis, "Improving Federal Educational Research, Development, and Evaluation," p. 6; and Maris A. Vinovskis, "Missing in Practice?: Systematic Development and Rigorous Program Valuation at the U.S. Department of Education," paper prepared for the Conference on Evaluation of Educational Policies, American Academy of Arts and Sciences, May 13-14, 1999.

71. D. W. Miller, "The Black Hole of Education Research," Chronicle of Higher Education, August 6, 1999, p. 1.

72. E. Suyapa Silvia and Judy Thorne, School-Based Drug Prevention Programs: A Longitudinal Study in Selected Districts (Research Triangle Park, N.C.: Research Triangle Institute, 1997), p. E23.

73. National Center for Neighborhood Enterprise, Violence-Free Zone Initiative (Washington, 1999).
