CHAPTER FOUR

Cures That Harm: Unanticipated Outcomes of Crime Prevention Programs*

*Reprinted from McCord, J. 2003. Cures that harm: Unanticipated outcomes of crime prevention programs. Annals of the American Academy of Political and Social Science 587: 16–30; with permission from Sage Publications and Corwin Press.

THE NEW YORK TIMES published an article on Thursday, 4 April 2002, announcing that “a trade group representing British pharmaceutical companies publicly reprimanded Pfizer for promoting several medicines for unapproved uses and marketing another drug before it received government approval” (p. C5). The reprimand was justified because the drugs had not been appropriately tested for safety. Pfizer risked causing harm. No such reprimand could possibly occur in the fields of social intervention.

Researchers, practitioners, and policy makers have begun to understand that evidence is required to identify effective programs to reduce crime. Yet they typically couple the desire for evidence with an inappropriately narrow focus. They ask, Does the program work or not? This question is too narrow because it fails to recognize that some treatments cause harm. Intervention programs may, for example, increase crime or the use of drugs. They may decrease the punitive impact of sanctions available to the criminal justice system. They may, perhaps, result in a reduced ability to cope with life, or even in premature death. Unless social programs are evaluated for potential harm as well as benefit, for safety as well as efficacy, the choice of which social programs to use will remain a dangerous guess. No public reservoir of data permits evaluating whether a given type of program meets even minimum requirements to provide benefits and avoid harm, either to recipients of the social programs or to the communities from which they come. Yet social harm is costly to the public, perhaps even more costly than physical harm.

Reluctance to recognize that good intentions can result in harm can be found in biased investigating and reporting. Many investigators fail to ask whether an intervention has had adverse effects, and many research summaries lack systematic reporting of such effects (Sherman et al., 1997). What has been called publication bias appears when analyses show that a higher proportion of studies reinforcing popular opinions than of studies contradicting them find their way into peer-reviewed journals (Dickersin and Min, 1994; Easterbrook et al., 1991; Scherer, Dickersin, and Langenberg, 1994). In summarizing the results of studies evaluating publication bias, Colin Begg (1994) reported that “most studies of the issue have consistently demonstrated that positive (statistically significant) studies are more likely to be published” (p. 401).

One reason for what appears to be a code of silence about adverse effects is fear that all social programs will be tainted by the ones that are harmful. That fear, perhaps justified in some quarters, would be like blocking publication of potentially damaging effects of Celebrex, thalidomide, or estrogen because the publication could slow experimental work in disease prevention. Social programs deserve to be treated as serious attempts at intervention, with possibly toxic effects, so that a science of intervention can prosper.

What follows is a discussion of some social programs that have been carefully evaluated using experimental designs with random assignment to a treatment and a comparison group.
They have been found to have harmful effects, and for this reason they are important experiments. Knowledge that well-designed, carefully implemented social programs can produce unwanted results should set a solid foundation for insisting that all social programs be coupled with evaluations that have scientific credibility.

The Cambridge-Somerville Youth Study

The Cambridge-Somerville Youth Study was a carefully designed, adequately funded, and well-executed intervention program. Furthermore, a scientifically credible research design played a central role in its construction. Richard Clark Cabot funded, designed, and, until his death, directed the Cambridge-Somerville Youth Study. As a professor of clinical medicine and social ethics at Harvard, Cabot had made a mark in medicine by showing how to differentiate typhoid fever from malaria. His etiological study of heart disease was widely recognized as an important contribution to the field. He had introduced social services to Massachusetts General Hospital and had been president of the National Conference on Social Work. Not surprisingly, in turning to the problem of crime, Cabot insisted on using a scientific approach, one that aimed to alleviate the probable causes of crime but also one that would permit adequate tests of the results of...
