At first sight the development of cancer control programs in Europe and in North America might seem to follow similar trajectories: on both continents they emerged in the late nineteenth and early twentieth centuries, and "early detection and treatment" were generally the cornerstone of policy.1 From this perspective, control was most likely to succeed if medical interventions began as early as possible in the development of the disease—or of a precursor to the disease—when, doctors2 believed, the chance of successful treatment was greatest. Thus the key tasks of control programs were to identify the disease or the risk of the disease at the earliest possible stage; to get patients to their doctors as soon as the disease, or the possibility of disease, was identified; and to ensure their early treatment by experts using a recognized means of treatment—generally surgery, radiotherapy, chemotherapy, or some combination thereof.
The term "control" was carefully chosen. Until recently programs did not emphasize the elimination or eradication of the disease, nor of the suffering and death it caused, at least in the short term.3 For most of the century, when mortality seemed to rise relentlessly, the assumption was that the disease and the risk of the disease would not go away, at least in the foreseeable future. Individual patients might be cured, but there was always the chance of recurrence. Mortality and incidence might eventually decline, but the disease or the risk of the disease would always be present in the population. It would always be in need of management or control. Thus, despite various "wars," "campaigns," and "crusades" to "conquer" the disease, the best that anticancer programs generally offered was the possibility of effective intervention if a cancer—or a precancerous condition—established itself in the body and was discovered early. To this end, they sought not only to control the disease therapeutically, but also to reform the behaviors, individuals, organizations, and social structures that encouraged delay.4
One standard story holds that—with the possible exception of Nazi Germany5—"early detection and treatment" dominated control programs until the 1960s and 1970s, when they were challenged by a growing interest in cancer prevention.6 In this account, attention broadened from the treatment of cancers at an early stage in their development to include the prevention of the disease before it started. The roots of this interest in cancer prevention are usually traced to Anglo-American work in the 1940s that identified smoking as a cause of cancer, which by the 1960s and 1970s widened to a range of other putative causes of cancer associated with occupation, environment, and lifestyle. The story traces the difficult birth of cancer prevention during this period, and of attempts by the state to identify and regulate carcinogenic substances.
The papers in this collection suggest that such a tale will have to be revised. Focusing primarily on Britain and the United States, the authors tell stories not of similar trajectories, but of a diversity of approaches to and meanings of control. In the first place, they suggest that the first phase—early detection and treatment—was itself characterized by many different approaches, including public education and the organization of cancer therapy. In Britain and America cancer agencies agreed that it was essential for people to seek medical attention as early as possible. They also agreed that expert surgery, radiotherapy, and chemotherapy were the only effective treatments. But they differed over how to get the public to go to the doctor, the role of public education, how cancer services should be delivered, who should provide them, and what forms of therapy were most appropriate to particular cancers.
In the second place, these papers also highlight a diversity of approaches to prevention in the twentieth century.7 The standard account is a tale of the difficult birth of...