CHAPTER FOUR

The Campaign for Mandatory Testing

Universal newborn screening programs are an anomaly in the United States, where medical practice has historically focused on the diagnosis and treatment of individual patients by their private physician, with the role of government generally restricted to regulation and payment. Although state hospitals and public health departments have long played a role in controlling infectious diseases, even most immunization programs have traditionally been implemented in the context of the private physician's office. Organized medicine has resisted attempts by government bodies to provide direct medical services to the general public, and government programs have focused on specific populations such as military veterans or the very poor.1 The initiation of universal newborn screening (NBS) programs, in contrast, required that state governments invest in the detection of specific medical conditions with relatively low incidence and no threat of contagion. What trends in medicine and in social and government policy made such programs plausible?

THE TURN TO SCIENTIFIC PREVENTION

Preventing disease through diet or medicine was established as an ambition at least as long ago as 400 BCE, when Hippocrates began giving advice on personal habits such as diet and exercise.2 Instruction on healthy living continued to be a common theme as modern medicine emerged in the eighteenth century. By the late nineteenth century, prevention of infant mortality was a central focus of medical and political leaders, with efforts focused on infant feeding and social conditions.3 Specific medical interventions designed to prevent particular conditions became more common in the early twentieth century as physicians applied a scientific understanding of disease etiology.
Leading advocates of preventive medicine urged universal application of such interventions as cod liver oil to prevent rickets, antibiotic ointment to prevent neonatal eye infections, and vitamin K to prevent hemorrhagic disease of the newborn.4 By the mid-twentieth century, pediatric clinicians were convinced of the utility of universal treatment to prevent childhood diseases, and standard clinical protocols were common for newborns in US hospitals.

Recognition of the utility of systematic prevention is one thread that led to NBS programs; faith in scientific medicine was even more critical. The general public had been fascinated by the power of science since hearing popular stories of germ-fighting scientists and physicians of the late nineteenth century, but demonstration of the value of germ theory and other scientific discoveries was still lacking through the early twentieth century. In 1910, the infant mortality rate was still well over 100 deaths per 1,000 live births in most US cities, and nearly every family knew the tragedy of childhood death.5 Highly publicized vaccination programs seemed to have little effect, and few new treatments provided the dramatic cures promised by the growth of laboratory science. By mid-century, however, public faith in the value of medical science was justified by the widespread use of penicillin in the 1940s and of the Salk and Sabin vaccines in the mid-1950s.6 The polio vaccines, in particular, were critical to the public appreciation of the power of the laboratory to prevent disease and improve health. Many adults have vivid memories of polio, recalling how families lived in fear of summer epidemics, when communities across the United States closed swimming pools and quarantined the ill, with the hope that what started as a mild viral illness would not become a local epidemic of death and disability.
By the late 1950s, polio had receded from the American landscape.7 While the polio vaccines confirmed the power of science to prevent disease, the identification and treatment of congenital syphilis solidified the idea of universal perinatal screening. As early as 1916, obstetrician J. Whitridge Williams required all women at his prenatal clinic at Johns Hopkins Hospital to receive a routine Wassermann test for syphilis because, he asserted, early treatment could prevent syphilis in the infant. In the mid-1930s, public health officials noted that approximately sixty thousand infants were born with congenital syphilis in the United States each year. To prevent infant deaths and morbidity due to this disease, in 1938 the legislatures of New York and Rhode Island issued regulations requiring blood tests for all pregnant women to prevent congenital transmission of syphilis. Similar laws across the country soon followed and seemed to be effective, even before the...