
CHAPTER ONE

“Laws of Life”
Developing Youth in Antebellum America

In the entry for “nubile” in the 1854 edition of his Medical Lexicon, Robley Dunglison cautioned fellow physicians against viewing puberty as a sudden transition to maturity for girls or boys. “Generally, the period of puberty is considered to be the age at which both sexes are nubile,” he explained. “They are truly nubile, however, only when they are fitted to procreate healthy and vigorous children, and are competent to discharge their duties as parents.”1 Dunglison’s definition heightened awareness of what he and other doctors were beginning to notice as a decline in the age of puberty. Although these physicians lacked a body of empirical data to support their claims, their observations have proved to be correct. In 1780, white middle-class girls reached puberty at about age 17, with boys arriving one year later. Historians estimate that because of rising prosperity, improved nutrition, and lower threats of infectious disease, menarche declined to about age 14 by 1900; boys continued to lag behind girls.2 For Dunglison and his medical colleagues, this decline threatened social and economic imperatives for later marriages among the middle class, as well as middle-class expectations for male sexual control and female sexual purity. To allay such threats, doctors conceptualized an “age of puberty” (extending up to age 25), along with organic developmental requirements to govern it, that deemed a prolonged period of innocent youth to be mandated by the laws of nature.

Contrary to Dunglison’s assumption about the invariable correlation between puberty and nubility, social and economic factors, not changes in the physical body, had long indicated readiness for marriage and the concomitant achievement of maturity.3 Moreover, in the hierarchical social order of the Colonial period, law and custom clearly denoted youth as a dependent and inferior stage.4 As with other social groups during the early nineteenth century, the formation of a democratic society and the spread of a market economy threw the subordinate status of youth into question. Social, economic, and cultural changes provided unprecedented opportunities for the young to assert their independence from old forms of adult control. At the same time, these forces gave rise to a constellation of imperatives that were rendering the young immature for longer periods of time. During the first half of the nineteenth century, currents in medicine, education, and religion came together around the newly emerging idea of development to define “youth” as a stage in the life cycle commensurate with these changes. Dunglison’s definitions emphasized what postpubescent young people were not, that is, physically, sexually, psychologically, or socially mature. Left unanswered were important questions about the quality of this “age,” the process by which the immature child grew through it to become the mature adult, and whether—and in what ways—various social groups would be constrained by its limits or be able to lay full claim to its privileges.

Historians have used the terms “semidependence” and “semiautonomy” to characterize the often stressful vacillation of those in their teens and early twenties between attachment to family and community, on the one hand, and personal freedom, on the other.5 Joseph F. Kett associates this struggle primarily with the white middle-class male and notes, with some regret, that the balance weighed more heavily toward the boy’s and young man’s increased dependence as the century wore on. Although he contends that the modern concept of adolescence was formulated at the turn of the twentieth century to describe and prescribe the experiences of white middle-class boys, he also suggests that white middle-class girls were the “first adolescents.” Declining need for their household labor, new educational opportunities, and widespread cultural anxiety about precocious female sexuality, he argues, came together during the first half of the nineteenth century to render girls the first group of young people to be both free from adult responsibilities and subjected to adult protection and supervision, two qualities that characterize modern adolescence.6 More recently, Jane H. Hunter’s insightful study of Victorian teenage girls’ lives has shown that girls, too, were pulled between duty to the family and a longing to cultivate the individual self. However, by century’s end, girls achieved more independence from their “little women” status and its perpetually circumscribed domestic sphere. By comprehensively examining girls’ experiences at school, as part of a peer culture, in the family, and in relation to mass culture, Hunter affirms and expands on Kett’s claim that “[i]n defining this modern life-stage [of adolescence], girls led the way.”7

An emerging discourse of development helped give meaning to and shape the experiences of male and female youths during the first half of the nineteenth century, and its interpretations can be located within the tradition of modern scientific thought on the concept of adolescence. In this period, ideas about child development can be found in novels, domestic advice manuals, pedagogical treatises, and religious tracts.8 Texts by physicians, with recommendations on caring for infants and young children, a precursor to pediatrics that would emerge in the late nineteenth century, also had growing cultural influence.9 This chapter focuses on physicians involved in the popular health movement of the 1830s through the 1850s who were particularly self-conscious about articulating developmental principles and norms and explaining “youth” as a stage in the life cycle. As they paved the way for new directions in professional medicine, these doctors departed from orthodox medicine’s reliance on heroic therapies and, instead, attended to the role of hygiene and prevention and to the interrelationships among the body, mind, and spirit in shaping health and disease. Hygiene advocates, drawing from and contributing to the broader reform climate of the antebellum years, were optimistic about human nature’s tendency toward perfection and the social progress that might follow. Their emphasis on self-improvement led many of them to consider the growth process, and their thinking about it dwelled on many themes that would remain important in the subsequent investigations of child development. Health reformers recognized childhood and youth as distinct periods of life; pondered the relationship between nature and nurture in development; described how physical, mental, and moral development interacted; and prescribed how these should proceed to maximize individual health, happiness, and potential, as well as to ensure the stability and progress of the newly ascendant democratic and capitalist social order.10

Health reformers’ goals for development—their expectations for what the growing child was progressing toward—were rooted in Enlightenment notions of liberal individualism, particularly as espoused by philosophers John Locke and Jean-Jacques Rousseau. For Locke and Rousseau, reason, autonomy, self-control, and virtue, qualities they exclusively attributed to the elite adult white male, characterized the mature self. According to Locke, youth was the stage of life when the boy tried and tested these qualities and decisively asserted his independence over the parental controls of his childhood. In contrast, Rousseau conceived of growth as a more gradual process, prescribing a longer period of youthful dependence to ensure the proper incubation of the qualities of mature selfhood the boy would eventually assume. Despite these differences, together Locke and Rousseau positioned the boy as the “first adolescent” of developmental thinking, positing that he alone passed through a stage that moved him out of the dependency of childhood and prepared him to enjoy the prerogatives and to assume the responsibilities of the mature individual.

For their part, American health reformers were uneasy about Locke’s recommendations for early youthful independence; they instead embraced Rousseau’s model of protracted and protected growth throughout the teen years. As a result, the girl came to figure prominently in their thinking about the developmental process. She was foremost in the minds of health reformers who sought to explain the relationship between the development of the body and the mind, especially those who argued that during childhood cultivating physical strength took precedence over “forcing” mental prowess. Girls were also particularly useful for explaining and illustrating the precept of gradual growth that applied to all children because the dangers of their sexual precocity seemed self-evident. However, discerning and explicating the “universal” laws of development in no way precluded health reformers from furthering prevailing cultural expectations for dichotomous sexual difference. Indeed, most conceptualized the “age of puberty” in such a way as to affirm that youth was a stage of life whose greatest privileges exclusively devolved to the boy. Nonetheless, some health reformers challenged this view. Without fundamentally undermining the ideology of separate spheres, they held out expectations for female development that contended that girls were equally entitled to and capable of experiencing a prolonged period of growth that would not only preserve their purity but also allow for the full cultivation of their physical, mental, and moral powers, thereby enabling their active participation in both the private and public realms of democratic society. As health reform advocates outlined them, the natural laws of development governed all young people, boys and girls, working and middle class, in similar fashion. At the same time, their conception of the stage of life that would come to be known as “adolescence” played an important role in the construction and enforcement of categories of social, and especially sexual, difference.

GIRLS AND BOYS GROWING UP, TO 1860

Antebellum conceptions of development and youth emerged out of a web of economic, social, and cultural changes that had been reshaping the experiences and meanings of growing up for girls and boys since the end of the seventeenth century. Institutional age grading was not a prominent feature of Colonial society. Indeed, the structures and rhythms of preindustrial work and play kept the young of all ages in proximity to one another and to adults. Nonetheless, Colonial Americans recognized broad distinctions among the stages of life, and, perhaps most significantly, the superiority of adults over minors was a primary axis organizing relations of power in their hierarchical social order.11 In Colonial New England,12 the first stage of life, infancy, was comparatively brief, marked by complete subordination and dependency for girls and boys, although not necessarily lacking in nurturing and love. By age 7, boys donned adult clothing for the first time, and by about age 10, they transitioned into extended stages of childhood and youth wherein they engaged in productive labor on the farms or in the shops of their fathers or masters, to whom they were bound out as apprentices or indentured servants. At the same age, girls began receiving training from their mothers or other mistresses to prepare for their future domestic roles and responsibilities. During this preparation, young people were expected to acquire greater proficiency with adult tasks and responsibilities, while deferring to the external authority of their parents, masters, and mistresses. In return, adults were obligated to protect and to support the young in their charge, as well as offer clear role models for adult identity, behavior, and occupation.13

In this Puritan social order governed by the principles of hierarchy, patriarchy, and mutuality, maturity entailed an assumption of authority, obligation, and deference in relation to others, the particular mix of which was determined by one’s rank and gender. Boys’ arrival at manhood was signified by some combination of land ownership, proficiency in a trade or profession, marriage (on average at age 25), and the establishment of an independent household. For them, maturity conferred greater freedom, control, and responsibility in relation to minors and women, although even men of the highest positions of social rank continued to owe obedience to God. Girls also anticipated that maturity would bring marriage (usually not before age 20), the creation of a new household, and the assumption of greater domestic responsibilities, including the command of children and servants. Maturity for girls also meant continuing their dependence on adult men, however, as they extended or transferred deference from father to husband.14

In part because of the establishment of stable social institutions, particularly the strength of the patriarchal family, youth was experienced and perceived as a relatively smooth period for much of the seventeenth century in the New England colonies, marked by the assurance of one’s current status and role and the clarity of the social position and identity one would assume as an adult.15 Such a condition contrasted with that of youth in early modern Europe, where social instability led to the rise of both rebellious youth cultures and widespread anxiety about youth as a perilous period in the life cycle. Puritan theologians most prominently exhorted against the dangers of youth. Drawing on the humoral model of the body from classical medicine, in which puberty in both sexes was perceived as a period of physical disruption, marked by an excess of heat and lust, they characterized boys and girls in their teens as sensual, heedless, willful, and prideful; argued that youth needed discipline and monitoring; and urgently called on the family, the church, and the university (for boys) to fulfill the social control function.16 At the end of the seventeenth century in New England, rapid population growth, increasing geographical mobility, and commercial development and expansion weakened patriarchal authority and undermined the exceptional harmony in age relations that had characterized the initial Puritan experiment. As court records reveal, some young people chafed against their subordinate status and resisted adult rule. As a result, they were now volubly condemned by religious authorities in the New World, who joined their English counterparts to pronounce against the wickedness of youthful sensuality and to warn of ensuing social chaos if that wickedness were not restrained, primarily through the mechanisms of conversion and church membership.17

As historian John Demos explains, beginning in the middle of the eighteenth century and escalating during the decades following the American Revolution, the experiences of youth became even more “disjunctive and problematic,” marked by “new elements of [social and psychological] stress.”18 The transformation to a democratic polity and a market economy upset the vertical social order that had enforced the principles of hierarchy, patriarchy, and mutuality in age (as well as gender, class, and race) relations. The adult roles and identities to which youth might aspire, along with the routes by which they might be acquired, proliferated. The changing political and economic order set new standards for the mature self based on the capacities for rationality, autonomy, and self-control, qualities embodied by the adult white male of the emerging middle class.19 Left unresolved, however, were the limits of individualism and to what extent various social groups would be able to assume its rights, prerogatives, and responsibilities. In this context, young people in their teens and early twenties faced both novel opportunities to exercise independence from adult control and renewed requirements that they depend on adult support and defer to adult authority. While some of these changes united the young around common experiences, how the balance and tension among autonomy, (inter)dependence, and subordination manifested in the lives of youth also diverged along lines of gender, class, race, ethnicity, and geographic location.

The changing social order of the new republic wrought transformations in work; educational, reform, and religious institutions; family life; and peer relations that affected different groups of young people in varying ways. From 1780 to 1840, as modes of capitalist production and exchange penetrated the rural and urban economies of the Northeast, white girls of that region saw their traditional work roles in the home undercut by new manufacturing technologies and the shift from family-oriented production to a market economy. Many joined their mothers and young male siblings in producing goods under the outwork system, weaving cloth or braiding palm leaf for hats at home under parental supervision. Many were also likely to spend some portion of their teenage years working for wages outside the home, in teaching, or, as in the case of the Lowell mill hands, in factory work.20 Historians disagree about the meaning of such vocational opportunities for girls’ lives. At the same time that girls from middling families were leaving home to pursue wage work, their mothers’ lives were increasingly associated in ideology, if not always in practice, with the separate sphere of the private home, economic dependence, and the “feminine” values of piety, purity, self-sacrifice, and compassion.21 The Lowell experience and teaching may have afforded some girls greater variety, mobility, and personal freedom than had been available to their female counterparts in the past or to adult women in the present; however, girls in the workplace were expected to follow the same proper feminine behavior required of their mothers. In addition, girls’ wages most often remained embedded within a family economy, and they anticipated a return to domestic responsibilities when they married.22 Moreover, girls’ own assessments of the relationship between wage work and domesticity were most likely mixed. Indeed, whether the girl looked forward to or dreaded the prospects of “woman’s sphere” following the relative freedom of work depended on whether she saw marriage and motherhood as raising her status or restricting her opportunities.23

In a society increasingly stratified by class difference, working-class girls also took jobs that set them apart from their mothers, though with less sanguine consequences for their lives. Native-born white and then immigrant girls who met the growing demand for urban domestic servants performed long hours of menial labor under their employers’ scrutinizing supervision. For many girls, almost any other form of wage work was preferable to domestic service and the social subordination, economic insecurity, and risk of sexual abuse that went along with it.24 Urban outwork and manufacturing jobs, such as those in the New York garment trades, brought risk and hardship to girls trying to make their way alone in expanding urban economies, where freedom from familial oversight meant the loss of an important source of protection and support.25 For enslaved girls, the teenage years were marked by increased vulnerability to masters’ sexual exploitation and forced labor that remained largely undifferentiated from their mothers’ work and bore no connection to the capacity for personal independence.26

In addition to changes in their work lives, white middle-class girls, in northern states, especially, took advantage of expanding educational opportunities during the early nineteenth century. The free public school systems established in the north in the 1830s attracted such large numbers of girls ages 5 to 12 that, by midcentury, the United States claimed one of the highest female literacy rates in the world. Managers of the Lowell mills encouraged the trend by hiring workers with school certificates.27 With the influx of immigrant labor after 1840 and the accompanying rationalization of the factory system, white middle-class teenage girls’ participation in the manufacturing workforce was undermined, both by labor competition and the attendant decline in respectability associated with this kind of work. With such forms of wage earning denied them and their traditional domestic functions rendered obsolete because of declining birth rates, the availability of immigrant domestic servants, and changes in the technology of housework, these girls increasingly made their way into the new urban public high schools, private boarding schools, and teacher-training regimens at normal schools and female seminaries. For the rest of the nineteenth century, these girls attended school in greater numbers than their middle-class brothers or working-class girls and boys.28

Urban middle-class parents sent their daughters to school for many reasons: to occupy them during the years between puberty and marriage; to prepare them to assume the elevated domestic and maternal roles of Victorian womanhood; to facilitate and to formalize the project of self-improvement and refinement at the heart of middle-class self-definition; and to display the family’s class status and claim the respectability associated with it. Historians have found that privileged parents were the least likely to acknowledge an economic rationale in supporting a daughter’s education. Given the economic volatility of the times, however, unspoken motivations for educating middle-class girls were that they would be better equipped to make a financial contribution to their family of origin and, if needed, to their family when they married (primarily through teaching); in a worst-case scenario, middle-class girls would even be able to support themselves. Whatever parental or institutional intent, though, schooling frequently created inner conflict and familial tensions, as both school structure and curricular content promoted female deference to class-specific gender norms and enabled girls to imagine and experience the self as a rational, controlled, and assertive being.29

Lower-class girls sometimes found themselves affiliated with institutions that in no way sanctioned youthful female autonomy. The New York House of Refuge, the first institution for juvenile delinquents in the United States, was founded in 1825, with separate departments for girls and African Americans. Three years later, the House of Reformation in Boston and the Philadelphia House of Refuge were established. In 1856, Massachusetts founded the Lancaster Industrial School, the first reform school for girls. Along with the public schools, these institutions helped to formalize the growing importance of age grading in American society. They also furthered the expanding cultural recognition of childhood and youth as life stages with distinct characteristics and discrete needs, deserving of adult protection but also subject to adult control. Most girls were brought to these institutions, either by reformers or by their parents, because they had committed, or were thought to be likely to commit, a sexual offense. Inclined to see all girls, even misbehaving ones, as essentially innocent, antebellum reformers geared their efforts toward the moral uplift and practical training, rather than the punishment, of female delinquents. Nonetheless, in removing girls from their families and carefully circumscribing their behavior in institutional settings, they also helped to set the limit for claims on the privileges of individualism, a line that poor, neglected, and “deviant” girls were clearly not to transgress.30

Religious institutions and practices also occasioned new opportunities for both the erasure and assertion of the self for teenage girls. Girls across the social spectrum embraced the revival spirit of the Second Great Awakening and, along with their mothers, helped to account for the majority of converts in the years from 1790 to 1840, as well as for the majority of church membership after that. Evangelical Christianity promulgated a religion of the heart, equally accessible to all who would surrender themselves to God’s love and mercy, which was to be received initially in the form of a spontaneous and emotional conversion. Both proponents and critics of the revivals emphasized the gender and immaturity of converts, thereby fostering an association, whether positive or negative, among this type of volatile, expressive religiosity, femininity, and youthfulness. For some girls and women, conversion was surely an intense and anxiety-provoking experience that also heightened their sense of passivity and dependency. However, many also deliberately chose the evangelical message because they found in it a respect for femininity and recognition of the moral importance of the roles of wives and mothers in family and society. Moreover, many took the message further, out from the home and into the world, and used it to justify pursuing an active life of missionary work or participating in social and moral reform efforts.31

Accompanying and enabling these economic and institutional developments during the early to mid-nineteenth century were changes in the structure and function of family life, which had important implications for the social and psychological lives of teenage girls. With the birth of the democratic republic and the expansion of the market economy, the urban white middle-class family was increasingly defined and experienced as an isolated conjugal unit, set apart from both larger kin and community relations and the workworld.32 The rise of economic specialization, the spread of the wage system, and the improvement in standards of living rendered the economic labor of middle-class wives both invisible and obsolete. The father was now the family’s designated sole economic provider, and the mother was the primary nurturer of children in the private home. As a strategy to protect and promote class status and as a reflection of women’s growing domestic influence, middle-class couples consciously limited their fertility, and family size within this group declined over the course of the century. Smaller families made intensive mothering possible, which for girls and boys of all ages meant that their physical health and emotional well-being received greater attention. Moreover, such close family relationships were extended over the course of the life cycle. Because girls delayed marriage to work or to attend school, and because they exerted greater control over choosing marriage partners, they remained primarily connected to their parents’ households into their early twenties.33 For girls, the psychological, emotional, and social ramifications of these changes were mixed. The private, affectionate family encouraged female selflessness by fostering intimacy among its members, especially with mothers, and valorizing the feminine qualities of empathy, compassion, and service. It also, however, promoted awareness and sometimes an assertion of the self as a separate, even special, being. In addition, the domestic family mandated prolonged, more careful, and sometimes invasive adult supervision and monitoring of girls’ lives, which raised new possibilities for emotional coercion and conflict among family members.34

Close connections with mothers and other adult female kin shaped many girls’ lives during this period, reflecting and reinforcing the continuities and similarities in the female experience across the life span.35 Girls’ friendships with peers and their participation in a distinct girl culture, however, indicated that female identity was becoming a product of age as well as gender. In female seminaries and the Lowell mills, for example, girls cultivated relationships and sought experiences that not only preserved qualities and values appropriate to adult femininity but also allowed them to fashion the meaning of such values and to experiment with their limits in their own lives. Through their romantic friendships, for example, girls considered how much space intense feelings might consume in their daily lives as girls and, later, as women. In making decisions about spending factory earnings and leisure time, they renegotiated the boundaries between self-sacrifice and self-assertion during this period of their lives.36 Through their immersion in popular literary culture, as well, girls identified with one another and wrestled with their culture’s multiple interpretations of the unique problems and possibilities of becoming a woman. Far from defining girlhood with a monolithic voice, periodical literature and novels of midcentury let girls know what was expected of them at this and subsequent stages of their lives and also opened up possibilities for them to respond to those expectations in voices of their own.37

Boys in their teens and early twenties also experienced competing pulls between freedom and subordination as a result of the socioeconomic changes of the late eighteenth to the mid-nineteenth century.38 Boys’ experiences with work, schooling, family life, and peer culture were also differentiated by race, class, and gender. The changes wrought by the market revolution were fundamental in transforming the meaning of youth for boys. By the 1790s, in the urban Northeast, capitalism was beginning to destroy traditional craft production and the apprentice system along with it. With the spread of the outwork system and the rise of industrial technologies, craft labor was being divided into discrete tasks. At the same time, to ensure profitability in the face of market competition and unpredictability, masters replaced long-term, personal, contractual labor relationships with more flexible ones. As free wage labor became more widespread, employers also replaced experienced journeymen with untrained, cheaper juvenile “helpers.” In trades from printing to shoemaking, apprentices no longer lived with masters’ families or received education or clothing as part of their indenture. Instead, youths, or their parents, received cash payments in return for their labor.39 During the first decades of the dynamic post-Revolutionary economy, some male youths learned to use new technologies more quickly and proficiently than their masters, and in capitalizing on their entrepreneurial spirit, they assumed an early independence as they made their way into the ranks of the new middle class.40 In the meantime, bound labor was increasingly debased and its subordinate status reinforced, associated as it was with black men in or moving out of slavery and with orphaned and destitute children and youths, who continued to be “placed out” as they had been in the Colonial period by those private charities and public agencies assuming responsibility for their welfare.41

The continued, if uneven, deterioration of apprenticeship and the mounting prominence of the wage system in the 1820s and 1830s marked a growing class divergence among male youth.42 Although parents across the social spectrum had long relied on the work of children and youth to maintain or enhance the household economy, some were now so threatened by the vagaries of the new economic relations that their children’s wage labor became essential to family survival.43 This was especially true of Irish and German immigrant parents, arriving in the 1840s and 1850s, who sent their children to work in factories or “sweated” them in cellar and garret shops of their own. While boys working for wages were less constrained by the family claim and were permitted a greater measure of social and sexual freedom than their working-class sisters, the wage system gave rise to new insecurities and exacted new forms of deference and restraint for those on the economic margins. Moreover, enthusiasts of houses of refuge and reform schools made little distinction between poor, neglected, and “troublesome” boys. Boys who came under their care were subjected to stringent modes of adult supervision for their protection and the preservation of social order.44

White middle-class boys, whose families could afford to forgo their wages, pursued many paths en route to autonomous manhood, a process that entailed, in the words of Joseph F. Kett, “a jarring mixture of complete freedom and total subordination.”45 The tension had its roots in middle-class male childhood, when boys moved between a domestic culture governed by the values of affection, mutuality, duty, and restraint and a boy culture characterized by aggression, competitiveness, irresponsibility, spontaneity, and independence.46 When the middle-class boy reached his teenage years, he faced the daunting yet exciting prospect of choosing an occupation. Boys continued to depend on their families for economic and emotional support. However, they also were compelled to rely on themselves, exercising initiative, making decisions, and cultivating abilities beyond the immediate direction and control of others. Thus, during this stage, which extended well into his late twenties, the youth might reside for a time with his family of origin; move to a city to live alone or with kin; continue his studies at a high school, academy, or college; or receive some sort of vocational training in his anticipated career. Whatever route, or combination of routes, he took, the possibilities for both means and ends and the vacillation between dependence and independence such possibilities occasioned frequently generated uncertainty, restlessness, and ambivalence in the growing boy.47

The unsettled status of male youth set the stage for familial and social conflict. Parents’ financial sacrifices to secure the boy’s education could cause tension among family members. Parents were also susceptible to bewilderment as they tried to determine the best way to influence their sons’ career and character development.48 In the broader society, tensions over the relative freedom of middle-class male youth peaked at such colleges and universities as Yale, Brown, and the University of Virginia. School officials tried to rein in youthful independence by publishing detailed conduct books carefully cataloguing rules boys were expected to follow regarding study, dress, recreation, and, especially, respect for authority. Boys caught transgressing these strictures faced warnings or expulsion. Careful regulation of schoolboys dates back to the Colonial era, when the subordination of youth and widespread suspicion about the heedless and irresponsible behavior of boys in groups were assumed. For their part, students earned their reputation as troublesome by carrying out pranks, rioting, fighting, harassing and abusing school officials, and, in one case, even committing murder. Although they were certainly a reaction to the repressive collegiate environment, student rebellions of the early nineteenth century also reflected the new, if partial, experiences of and expectations for freedom that had taken hold in the lives of middle-class boys.49

Whether they were at school, in rural communities, or in more anonymous urban settings, teenage boys and young men relied on one another for help in coping with the insecurities of coming of age. Male youth culture incorporated attributes of the boy the youth once was with the man he would become, channeling the impulses of the former toward goals appropriate to the latter. This set boys apart from the adult world and helped to facilitate their movement into it. Voluntary associations, including religious groups, military companies, self-improvement societies, and civic organizations, captured the attention of young men during this period. These organizations, often created by young men, allowed youths to indulge in competitive play with friendly comrades on their own terms, while offering them a chance to obtain knowledge, hone skills, and exercise values central to adult male identity.50 Intimate friendships with male peers were also important during this stage of life, though, unlike female friendships, most lasted only for the duration of youth. Boys were expected to outgrow the dependence and individualized compassion such relationships required. Emotional expressiveness and vulnerability were reserved for marriage, while self-containment, detachment, and restraint characterized relations among men.51

The decades following the birth of the American republic thus marked an important transition for youth. As historian C. Dallett Hemphill shows, the transformation to a democratic polity and a market economy led to a shift from a “hierarchical to a horizontal social order” that for young people of the white middle class, in particular, portended a rise in their status. While children were decidedly subordinate to adults in the new republic, even as they were treated with greater benevolence than before, white middle-class youths were integrated into the adult social world. They were presented with opportunities and challenges to acquire the capacity for self-government, rational choice, and individual initiative to ensure the success of the democratic order and the free-market economy, as well as to secure their eventual hegemonic position in both.52 Even as the status of such youth began to rise, however, it was called into question. What were the limits of social equality in the relationship between adults and youth? What position and posture of youth would best serve the interests of a rising middle class? How important were gender differences in youth to establishing those limits and securing those interests? Young people learned for themselves there were no easy answers. New opportunities for girls to engage in factory work, for boys to select careers, and for youth of both sexes to choose marriage partners entailed a degree of autonomy unknown to youth in the hierarchical, patriarchal society of the Colonial period. At the same time, the emergence of age-graded institutions and the rise of the affectionate family reasserted the (inter)dependent status of youth, even as they both opened up possibilities for the young to craft autonomous cultures from which they might challenge new forms of adult control. 
While youth had long been recognized for its liminal nature, as a bridge linking two other stages of life, its in-between quality took on new meaning during the early nineteenth century. It referred to not only the transitional function of the life stage but also to the contradictory experiences and ambivalent expectations regarding the capacity for independence that such a transition now entailed.53

THE EMERGENCE OF THE DEVELOPMENTAL PARADIGM

The problem of youth’s liminality was raised in the explanations of the growth of the child offered by participants in the early-nineteenth-century popular health movement. This group of unorthodox physicians and their lay supporters named and assuaged cultural anxieties about youth as a critical stage in the life cycle. They drew from and built on a broad Euro-Anglo American intellectual context marked by a focus on the meaning and significance of developmental processes. It was this context that prepared the way for Charles Darwin and, paradoxically, for the subsequent widespread acceptance of the “non-Darwinian” view of all forms of development, including child development, as orderly, linear, purposeful, progressive, and even divinely ordained.54

The division of human life into successive stages, as well as speculations about phylogenetic evolution, can be traced to ancient times. However, early notions of change in the individual, as well as in the natural and social worlds, were largely static, treating change as the manifestation of that which had been present all along. Thus, Medieval Christians envisioned a depraved adult existing in full, although miniature, form in the mind of the child, marking differences between the child and the adult in degrees rather than in kind. In the eighteenth-century science of embryology, this view was expressed, albeit with greater nuance, as “preformationism,” the idea that complicated organisms unfolded out of preexisting entities in the sperm or egg. In addition, beginning with Aristotle and persisting into the eighteenth century, the concept of the Great Chain of Being described a fixed, hierarchical order with all the beings of the universe occupying unchanging positions within it.55

From the eighteenth century and into the mid-nineteenth century, a confluence of influences from philosophy, theology, and science contributed to a new formulation of development whose broadest tenets were dynamic change, relational continuity, and faith in progress. Republican and free-market ideologies challenged the fixed, hierarchical social model, replacing it with a fluid, egalitarian model rooted in the prospects for individual freedom, responsibility, and potential. The teachings of liberal and evangelical Christianities, along with the reform movements they inspired, unified promises for individual and social salvation by proclaiming that perfectible human beings had the power to realize the perfection of God’s kingdom on Earth. The dynamic, dialectical theory of history put forth in the ideal philosophy of Georg Hegel and the material philosophy of Karl Marx advanced the idea that any phenomenon must be explained in terms of its role in a continuous historical process. The group of German romantics known as the “nature-philosophers” posited the existence of a fundamental unity among all natural phenomena, deemed progressive, systematic change to be a general process in Nature, and speculated about a correspondence between individual development and the development of species. A dynamic concept of development also emerged as the centerpiece in geological research, most prominently in the work of British geologist Sir Charles Lyell, as well as in early theories of biological evolution, most notably those of French botanist and invertebrate zoologist Jean-Baptiste Lamarck. 
In embryology, the epigenetic view, wherein undifferentiated organic structures grew sequentially into more complex ones through a process of dynamic change, successfully challenged preformationism.56 In physiology, the work of British, French, and German scientists on cell formation and activity provided what historian Carolyn Steedman termed “a series of imaginative and figurative paradigms for describing individuals, time and change” that also made important contributions to the organic, teleological model of development that would dominate for the rest of the century.57

Meanwhile, new possibilities for postnatal individual development were entertained under the continuing intellectual influences of the enlightened rationalism of John Locke, which emphasized the malleability of the child, and the romanticism of Jean-Jacques Rousseau, which recognized the uniqueness and promise of child nature.58 Locke’s ideas about human nature, the family, and child rearing, as explicated in his Two Treatises of Government, Essay Concerning Human Understanding, and, especially, Some Thoughts Concerning Education, all published during the 1690s, took hold in the American context during the mid-eighteenth century as Americans struggled to define a paradigm for the self appropriate to a changing political and economic order. As historian Jay Fliegelman explains, “In a new political world in which government was to exist for the governed, the educational paradigm would provide a new model for the exercising of political authority.”59 At the foundation of Locke’s philosophy was the premise that human nature was both fundamentally reasonable and inherently self-interested. Countering Calvinist notions of the child as inherently depraved and monarchical assumptions about the fixity of the status of the individual at birth, Locke maintained that children’s minds and characters were pliant and that right concepts and attitudes could be instilled in them through an education that fostered the development of the power of reason. His prescriptions emphasized the primacy of nurture and fostered recognition of the difference of child nature. “When I talk of reasoning,” he clarified, “I do not intend any other but such as is suited to the child’s capacity and apprehension. Nobody can think a boy of three or seven years old should be argued with as a grown man.”60 He also advised individual parents to discover the nature of their own particular child and adjust their educational regimens accordingly. 
All parents, however, shared the same goal in child rearing: the cultivation of a rational, self-governing, autonomous adult whose self-interest and concern for the common good were completely compatible. In Locke’s scheme, that end was best achieved by parents requiring strict obedience from young children to curb their natural, but often excessive, longing for liberty. Ideally, this would be done by appealing to the child’s reason rather than relying on corporal punishment. It was also to be accomplished by granting or withholding parental affection. In addition, parents were to exercise careful control over their children’s physical habits, inculcating the capacity for self-denial by exposing them to fresh air, cold baths, and simple diets. Under the influence of these child-rearing methods, children would come to obey their parents and, in the process, internalize the normative social values parental strictures enforced, not out of fear or blind submission but because they would understand it was in their best interest.61

Once reasoning was established (at about age 12 or 13) and control over individual desires and appetites became habitual, parents would relinquish their external authority, and dependent childhood would give way to independent youth. Locke envisioned youth as that period when the individual experiments with newly acquired capacities for rational choice, self-control, and purposeful freedom.62 Addressing fathers about how to treat their sons who had made it through childhood, he advised: “The sooner you treat him as a man, the sooner he will begin to be one: and if you admit him into serious discourses sometimes with you, you will insensibly raise his mind above the usual amusements of youth, and those trifling occupations which it is commonly wasted in. For it is easy to observe, that many young men continue longer in the thought and conversation of schoolboys, than otherwise they would, because their parents keep them at that distance, and in that low rank, by all their carriage to them.” By treating their teenage sons as equals, fathers would be able to secure their sons’ friendship. Paternal authority thus would be established more decisively because it would be founded on expressions of love and esteem, not coercion. Unlike children, whose reason was not fully formed, youth would learn that their happiness depended on wanting for themselves what their parents and society expected of them. Therefore, according to Locke, the competing values of individual freedom and social order would be balanced and secured.63

Rousseau’s romantic concept of the developing child, as presented in Emile, or On Education (1762), influenced some American child-rearing advisors by the 1790s64 and furthered some Lockean notions while challenging others. Rousseau emphatically asserted the existence of an essential child nature that adults were to discern and education was to follow. He rejected Locke’s prescriptions for stern discipline and for thwarting desire in favor of a childhood of self-motivated activity, exploration, and enjoyment. Nonetheless, as the ubiquitous presence of Emile’s tutor made clear, Rousseau also proposed that a natural development could only be facilitated by nurture, however carefully adult direction might be disguised. Herein lay the fundamental paradox of all nature-based theories of development. After all, the task of determining the qualities of the child’s nature ultimately fell to the child’s adult governors. Thus, while Emile feels free to follow his own inclinations, his tutor has already manipulated his environment so that Emile’s interests unfold according to a larger, predetermined social purpose.65

For Rousseau, as well as for Locke, that ultimate purpose was to transform youth into an autonomous adult, whose concern for the common good would further the stability and progress of civil society. Whereas Locke believed this could be achieved by granting youth a measure of independence early on, Rousseau made the case for a prolonged, youthful dependency. Both Locke and Rousseau deemed the young to be essentially innocent and sought to protect them from the corrupting influences of society. For Locke, though, this meant exposing the youth to the dangers of the environment so that he might apply his powers of reason and self-control to resist and overcome them. Rousseau was more cautious about releasing the youth into the world, in part because he considered the ways that the physical changes of puberty potentially compromised youths’ ability to exercise higher powers of the intellect, the will, and the heart. To Rousseau, youth was a “second birth,” incited by the maturation of the reproductive organs and the accompanying onset of sexual feeling. He warned that the nascency of the sexual passion had the potential to ignite a “stormy revolution” in feeling, thought, and behavior that could overwhelm the youth, dangerously rendering him sensual, irrational, rebellious, enervated, and passive.66 However, he was also careful to emphasize that such perils were less determined by nature than by a corrupt education and precocious experience. “Nature’s instruction is late and slow,” he contended, whereas “men’s is almost always premature.” Hence, “learned and civilized” people arrived at puberty earlier than “ignorant and barbarous” ones, whose moral “simplicity” enabled them to remain “peaceful and calm” well into the period of their youth.67 His prescription: that every effort be made to slow the youth’s development and follow the gradual course that nature intended. Hereby would Lockean ends still be secured, albeit by quite different means. 
That is, only by way of a protracted, steady development that did not test the youth’s capacities too soon could the newly emerging sexual energy impart the “vigor and force” to the body, mind, will, and heart that would enable the complete development of the free individual who was ready and willing to serve the public interest.68

In seeking to reconcile the rights of the individual with the imperatives of the social bond, both Locke and Rousseau highlighted the significance of youth as a stage of life. While Locke regarded the child as morally neutral and Rousseau granted that the child possessed an inherent sense of justice, neither went as far as the eighteenth-century Scottish common-sense philosophers who posited the existence of a moral sense from birth and attributed to the child a natural disinterested benevolence.69 To both Locke and Rousseau, youth was an important stage of life because the self-centeredness of childhood gave way to the self as a social being. For Locke, goodness throughout life was motivated by self-interest that was discernible by reason. As one’s capacity for reason increased with the advancement of childhood, so too did one’s capacity for virtue. Rousseau deemed that goodness originated in the instinct for self-love rather than in rationally calculated self-interest. Exceeding Locke in valorizing the youth’s acquisition of conscience and capacity for selfless compassion, he described the boy at adolescence as “the most generous, the best, the most loving and lovable of men.”70 Primed by the communal imperatives of the sex instinct, the boy who followed nature’s dictate for gradual growth would spend his youth acquiring the feelings of benevolence, empathy, and pity that were to form the foundation of the affectionate family and the democratic state: “Little by little the blood is inflamed, the spirits are produced, the temperament is formed. The wise worker who directs the manufacture takes care to perfect all his instruments before putting them to work. A long restlessness precedes the first desires; a long ignorance puts them off the track. One desires without knowing what. 
The blood ferments and is agitated; a superabundance of life seeks to extend itself outward … One begins to take an interest in those surrounding us; one begins to feel that one is not made to live alone. It is thus that the heart is opened to the human affections and becomes capable of attachment.”71

With Locke’s sons and Rousseau’s Emile as the explicit subjects of their educational treatises, these philosophers laid a solid foundation for subsequent developmental thinkers’ rendering of the boy as the normative adolescent. Both intended their prescriptions for the rearing of young children to apply to boys and girls alike, contending that there were no salient differences between the sexes during childhood. Youth, however, was a stage of life unique to male development. Whether by way of an early exposure to the world or of a more protected, protracted growth, only boys outgrew the dependence of childhood and achieved the ultimate aim of development: the formation of an autonomous, self-governing individual capable of making a contribution to the common good. Rousseau located the cause of sexual difference in nature, asserting that it was the changes that occurred in the body at puberty that set boys and girls apart and that propelled boys on their way to cultivating the highest powers of the intellect, the will, and the heart. “Up to the nubile age children of the two sexes have nothing apparent to distinguish them,” he declared. “Everything is equal: girls are children, boys are children; the same name suffices for beings so much alike … And women, since they never lose this same similarity, seem in many respects never to be anything else. But man in general is not made to remain always in childhood. He leaves it at the time prescribed by nature; and this moment of crisis, although rather short, has far-reaching consequences.”72

Whereas Locke barred girls from the privileges of youth (and by extension, mature adulthood) simply by ignoring them altogether, Rousseau took a different tack. In his account of the education of Emile’s future wife, Sophie, he excluded girls by offering up an account of their developmental difference. For Emile, the onset of puberty, if properly managed through channeling the sex instinct toward higher education, self-control, and concern for the common good, inaugurated the capacity for self-determination and social creation—“far reaching consequences,” indeed. For Sophie, the emergence of the reproductive powers channeled her into a predetermined, generic female destiny, as wife and mother in the domestic sphere. Rousseau granted Sophie some measure of rationality, autonomy (in the choice of marriage partner), conscience, and moral influence within the private family, which was rooted primarily in her instinctual sexual modesty. He emphasized, however, the ways that her distinct female nature and the education that followed from it rendered her Emile’s complement but not his equal in the pursuit of the rights of the individual or the responsibilities of citizenship.73 Although the approaches of both philosophers to the problem of the teenage girl differed, their expectations for development were much the same: Only boys experienced youth as a stage of life; only they emerged from it poised and permitted to claim the full entitlements of maturity in a democratic social order.

GROWTH AND DEVELOPMENT IN THE POPULAR HEALTH MOVEMENT

Participants in the popular health movement of the 1830s through the 1850s played a leading role in conveying and construing these many currents of developmental thought for parents, educators, and young people.74 While diseases plagued many Americans and the theories and therapies of eighteenth-century heroic medicine were increasingly under public and professional attack, several disparate groups of unorthodox physicians and their lay supporters sought to restore Americans’ health through programs for better hygiene. Within medicine, the turn to hygiene was propelled by research from the Paris, or Clinical, school, which stressed the empirical evaluation of disease and treatment. It was also furthered by advancements in the science of physiology. Both regular and unorthodox early-nineteenth-century physicians increasingly relied on a growing body of scientific knowledge about the function and organization of the body’s tissues, organs, and systems, whose workings were perceived to be directed by predictable laws of nature. Of interest to physiologists in this period was the process of bodily growth and development.75 Such physiological knowledge was not monolithic. Academic physiologists and medical practitioners disagreed about the functioning of respiration and digestion, the performance of the nervous system, and whether materialism or vitalism explained the origins of life forces. Health reformers occupied a range of positions in these debates, with some quite closely aligned with orthodox medicine and others more decidedly on the margin of medical theory and practice. For their part, most orthodox physicians dismissed all types of health reform as inferior “quackery,” even as they also sought to enhance their prestige by attempting to appropriate popular physiology for their own domain. Health reform thus constituted a challenge, a complement, and a vanguard to the medical establishment of the early nineteenth century.76

Outside of medicine, what historian James Whorton calls “hygienic religion” thrived on the meld of optimistic, romantic, individualistic, and Christian perfectionist orientations of antebellum American culture. Health reformers were drawn from the ranks of the urban white middle class that led the way in creating the values of that culture, and they played an important role in fashioning and propagating its tenets. In particular, it was their use of scientific authority to confirm and extend the tenets of liberalized Christianity, including a code of Christian morality, that made health reform so appealing to middle-class audiences. Like other antebellum reform movements, health reform was firmly rooted in the emerging denominations of evangelical and liberal Protestantism. Despite important differences among them, all the variants of liberalized Christianity positioned themselves against orthodox Calvinist beliefs in original sin, predestination, and a judgmental God, embracing instead the inherent goodness of humankind, the role of free will in achieving salvation, and a vision of God as benevolent Creator. Espousing a “natural theology,” health reformers believed that all of nature, including human nature, revealed God’s will and his beneficent promises to humankind as corporeal, spiritual, and social beings. Health reformers, in concert with their fellow reformers who crusaded for temperance, moral reform, and abolitionism—and with whom they often collaborated—forcefully linked individual improvement and salvation with social regeneration and progress. Their particular contribution to the antebellum reform impulse lay in their insistence that the scientific study and understanding of natural laws were vital for accessing the religious truths and moral principles that were to guide human beings toward perfection, in both this life and the next.77

Whether promoting homeopathy, hydropathy, or vegetarianism, antebellum health reform advocates maintained that physical, spiritual, and social perfection could be achieved through a knowledge of and willingness to apply certain fundamental laws of physiology and health. As Sylvester Graham, the Presbyterian minister turned self-educated physiologist and icon of the health reform movement, explained, “The constitutional nature of man, is established upon principles, which, when strictly obeyed, will always secure his highest good and happiness:—and every disease, and every suffering which human nature bears, result from the violation of the constitutional laws of our nature.”78 In hundreds of instructional tracts and lectures, such as those given under the auspices of the American Physiological Society, founded in Boston in 1837, health reformers explicated what they called the “laws of life” for eager audiences. For all health reform advocates, the effective comprehension and application of the laws of human nature depended on the acceptance of three basic principles. First was the belief in the beneficence, and also the primacy, of nature. God had created nature—human and otherwise—for the purposes of pleasure and enjoyment. Health and vitality were every person’s birthright. Disease resulted when human beings wittingly or unwittingly rejected their divine inheritance through their failure to understand and to obey nature’s precepts for proper diet, dress, exercise, and sexual hygiene. Following from the belief in nature’s absolute goodness was the correlation health reformers drew between physical and moral laws and among the physical, intellectual, and moral aspects of human nature. Reviving and recasting ancient beliefs about the interrelatedness of the body, mind, and soul, health reformers argued that sickness in the body both reflected and produced sickness of the mind and soul and vice versa. 
Likewise, physical vigor, mental well-being, and moral virtue were related as causes and effects of one another. Finally, all versions of the health reform creed preached the importance of self-help and prevention. All individuals were responsible for learning the laws of nature and for adjusting their habits and environment accordingly. Adults were to pursue self-knowledge for themselves and cultivate it in their children. Doctors were to watch for indications of abusive habits in their patients and to educate them about the changes they needed to make before diseases struck.79

If the will of God intended for all human beings to be healthy and happy, his will also meant for them to grow and develop, particularly, though not necessarily exclusively, during the early stages of life. Indeed, health and happiness throughout life depended on following the laws of development during childhood and youth. Therefore, it was not enough for the would-be health conscious to grasp generic laws of human nature. How nature changed over time had to be noticed. “In judging … the propriety, advantages, or evils of exercise, food, and clothing,” asserted Andrew Combe, “we must take into consideration not only the kind of exercise, the kind of food, and the kind of clothing, but also the age, health, and kind of constitution of the individual who uses them, and adapt each to the degree in which it is required.”80 Amariah Brigham, whose Remarks on the Influences of Mental Cultivation and Mental Excitement Upon Health had a major influence on school reform in the 1830s, agreed. All educational programs, he argued, “should be formed, not from a partial view of [the child’s] nature, but from a knowledge of his moral, intellectual, and physical powers, and of their development.”81 According to William Andrus Alcott, the physician and educational reformer who published numerous popular guides on health and character development (and the relationship between them) for young men and women, such powers had merit on their own terms: “I wish to see [youth] so educated that they will not only be what they should be, when they come to adult age, but also what they should be now. 
They have or should have a character to acquire now; a reputation to secure and maintain now; and a sphere of personal usefulness and happiness to occupy now.”82 The first president of the American Physiological Society, Alcott had earlier served as an assistant to William Channing Woodbridge, the editor of the American Annals of Education and the most important popularizer of the ideas of Swiss pedagogue and Rousseau disciple, Johann Pestalozzi, in the United States.83 In keeping with Rousseau and Pestalozzi, Combe, Brigham, and Alcott promoted the notion of childhood and youth as distinct periods of life with particular characteristics, discrete needs, and even unique contributions to make to society.

So, too, did Orson Fowler, hygiene enthusiast and one of the leading American popularizers of the widely embraced science of phrenology. Phrenology was a simpler and more empirical version of the faculty psychology espoused by the Scottish common-sense philosophers, particularly Thomas Reid and Dugald Stewart. Its practitioners enumerated some thirty-seven mental faculties and located each of these in a specific area of the brain. They claimed that they could read the “bumps” on a person’s skull to determine the relative strength and weakness of these traits so as to offer a prescription for directing the individual’s capabilities and character.84 Fowler and others conceived of phrenology as a developmental science. That is, in mapping the site of the range of mental capacities and character traits from combativeness to cautiousness to agreeableness onto different parts of the cerebral cortex, phrenologists also associated their appearance and relative influence with the stages of the life cycle. “Man is not brought forth, like the fabled Minerva from the brain of Jupiter, in the full possession of every physical power and mental faculty,” Fowler explained, “but a helpless infant, yet grows by slow but sure gradation in strength and stature to ultimate maturity.” First to develop, according to phrenologists, were the lower “animal propensities,” the physical drives for food, sex, and survival, whose “organs” were located at the center and back of the skull. Next to develop, and located in the front of the head, were the more advanced powers of reason and perception. Emerging last from the area at the top of the brain were the moral and religious sensibilities, the highest and most important capacities of human nature. The structure of the brain thus provided important clues into the nature of the child at each stage of development and offered an infallible guide for the deployment of children’s education. 
Alcott’s regard for the “now” notwithstanding, all health reformers joined with phrenologists in casting the process of development in such decidedly teleological and progressive terms. Nature’s universal tendency was toward perfection, proclaimed the ever-optimistic Fowler, who declared improvement to be “the practical watch-word of the age.” Childhood and youth were not, therefore, the most enjoyable stages of life, for happiness increased in direct proportion to the augmentation of intellectual and moral excellence.85 Thus, while promoting the recognition and even the appreciation of the different qualities marking the stages of life, health reformers nonetheless essentialized a clear hierarchy of value among them.

Health reformers such as Brigham, Alcott, and Fowler understood the capacities, traits, and requirements marking the stages of development to be not only determined by the imperatives of organic growth but also fundamentally assisted by the processes of “cultivation” and education, as well as by the exercise of the individual will. In this way, these developmental thinkers posited something of an interaction between the forces of nature and nurture. This enabled them to counter Locke’s “materialism” by asserting that children possessed a God-given nature, their minds endowed with inherent capacities to know and to love eternal truths. Thus Brigham found it imperative to declare that he did not deny the existence of the “immortal and immaterial mind,” while he also recognized that the brain must be cared for as the organ through which the mind operated.86 At the same time, health reformers’ appreciation of a relationship between nurture and nature also allowed them to check what they perceived to be the excesses of Rousseau’s romantic naturalism with Locke’s conception of the self as educable, autonomous, and self-governing. Indeed, their particular reading of Rousseau, which largely ignored the philosopher’s own focus on education and self-determination in fashioning the individual into a social being, revealed a deep ambivalence toward the tenets of romanticism. On the one hand, they embraced romanticism’s sanguine view of nature, its veneration of the innocent child, its valorization of the unique individual, and its optimistic allegiance to the process of becoming. 
On the other hand, they remained suspicious of its rejection of civilization, registered a profound distrust of its celebration of the passions, and were reluctant to acknowledge an embodied, emotional self beyond the reaches of rational self-control.87 For Elizabeth Blackwell, as for other health reformers subscribing to an ethos of Christian perfectionism, the ultimate goal of development was divine, not primitively natural, perfection. Thus, Adam and Eve in the garden, not the noble savage in the wilderness, constituted her “ideal of the Human race,” which amounted to a harmonious blend of “beauty and strength, lofty intelligence, powerful action, and purity of soul.”88 Those prescribing more secular developmental outcomes were nonetheless also careful to set limits on the romantic conception of the self. “Rousseau has advocated with much speciousness and sophistry this unthinking, savage, or what he calls the natural condition of our species,” William Sweetser proclaimed. “[However,] [t]he tendency of man is obviously to civilization and mental progress, whence the highest moral and intellectual advancement of which he is capable, is the only natural state that can be predicated of him.”89

Blackwell offered up a systematic explanation for realizing this progressive tendency. In her 1852 work The Laws of Life, she carefully distinguished between “organic life,” in which nature determined both the means and the ends of growth, and “related life,” in which education and individual will played crucial roles in establishing and achieving the higher aims of human existence. After all, she asserted, the aim of development was for the body and the mind to become not only functional but also beautiful. The object of each human being was not merely to live but to live well. “[N]ot our life, but the purpose of our life,” she emphasized, “is under our own control.” Moreover, the balance between organic life and related life shifted with age. In infancy and early childhood, when nature’s intentions were entirely clear, the only responsibility of the child’s caregiver was to furnish the optimum environment for nature to do her work. After age 7, when nature did not speak in such a clear voice, however, favorable conditions for organic growth still had to be provided, as did the noble aims toward which physical, mental, and spiritual development ought to be directed and the training to enable the individual to act independently to achieve those ends.90

In attempting to reconcile the forces of nature and nurture, Blackwell laid out four principles of development for children and their caregivers to follow. First, the “law of exercise” dictated that all the faculties had to be used for them to grow. Second, the “law of order in exercise” deemed the growth of the various faculties to be conditioned by “periodicity,” with different aspects of human nature predominating during different stages of the life cycle. Most important, mothers and educators must understand that the development of the mind was to follow the development of the body. Childhood and youth were thus to be ruled by the “sovereignty of the body,” with the maturity of the mind and soul to be achieved later after physical education was complete. Third, the “law of balance of exercise,” or the “law of compound movement,” qualified the law of periodicity by maintaining that at no stage of life could the body, mind, and soul be wholly separated, that indeed at every age the requirements and possibilities of the whole being had to be attended to. Fourth, the “law of use in exercise” proposed that every being possessed a “special purpose” and a “universal use,” which, if conformed to, was guaranteed to lead to the individual’s utmost health and happiness. “The perfection of our human nature, in its double capacity of body and soul, ready for strong and healthy action,” Blackwell summarized, “can only be attained by the gradual unfolding of this nature, according to the Divine order of growth. This order requires that the material development shall precede the spiritual growth; that during youth the mind shall grow through the physical organization; that our education of the mind shall always be subordinate to our education of the body, until the body has completed its growth.”91

Failure to abide by these laws of development resulted in two possible “evils” that threatened the child’s movement toward divine perfection: the problem of imperfect or arrested development and the problem of precocious or early development. Both of these offenses against the developmental process violated what Fowler deemed to be nature’s universal motto of “on time.”92 They also both posited the body as a closed system with a limited amount of energy. In this assumption, health reformers were influenced by the groundbreaking work in the physical sciences, most notably the formulation of the First Law of Thermodynamics, or the law of conservation of energy, which was most cogently articulated by German physicist and physiologist Hermann von Helmholtz in the late 1840s. When applied to organic matter, the law dictated that bodies possessed a finite quantity of “vital force,” or nervous and muscular energy, that had to be carefully apportioned and balanced to achieve growth and maintain health. A body expending energy on one task, such as strenuous physical activity, would not have any energy to spare for another task, such as high-level thinking.93 In keeping with this law, Blackwell explained that the first problem of imperfect development arose when adults neglected to provide children with the proper conditions that would aid appropriate and timely development. A missed opportunity to cultivate the body during childhood and youth inevitably led to perpetual weakness and permanent suffering because energies later in life were to be devoted to other capacities. “Precocity” was even more dangerous. Blackwell warned that the “reserve of vital force” possessed by children and youth was never to be misdirected toward objects of later existence. If mental cultivation were pursued at a moment when the body was nature’s main concern, the child would grow disproportionately in one direction at the expense of the other. 
Ultimately, both mental and physical development would be arrested, with neither reaching its fullest potential.94

Of particular concern to antebellum health reformers were infant schools, those experimental educational institutions founded in the 1820s and 1830s that presumed to give children an advantage by cultivating their intellects at an early age. The fate of the child prodigies produced by such a regimen was perceived to be dire indeed. “This green-house method of forcing premature development,” Fowler starkly cautioned, “weakens all their powers while alive and hastens death.” Health reformers also decried the teaching methods prevalent in the common schools, which promoted the cultivation of reason too soon through early reading and rote memorization. Rather, children learned best by observing and focusing on areas in which they had a “natural aptitude.” “That mind will be likely to attain the greatest perfection,” Sweetser counseled, “whose powers are disclosed gradually, and in due correspondence with the advancement of the other functions of the constitution.”95

Even more detrimental than the menace of intellectual overpressure was young people’s involvement with dangerous temptations that health reformers perceived to be rampant in an industrializing, urbanizing society, such as stimulating food and drink, wearing fashionable dress, reading sensational literature, and socializing with disreputable associates. Exposure to these perils was exacerbated by girls and boys working and attending school, prostitution in urban areas, proximity to working-class and foreign servants in the home, and commercial publishing. According to health reformers, these activities unduly stimulated the emotions, weakened the nervous system, and compromised healthy development of the body and mind. Particularly troublesome was the potential for licentious thoughts and behaviors. Although health reformers held diverse views about sex, most advocated a single standard of morality—self-control in men, purity in women, and innocence in children.96 For Christian physiologists such as Blackwell, Alcott, and Graham, who celebrated the beneficence of nature while linking physical and moral cause and effect, immoral habits did not originate with the body but would surely produce damning somatic results. Thus, the original instincts of the body were not impure but could be made so by unchaste thoughts acted on by a weak or immature will. Bodily indulgences, in turn, led to more immoral thoughts, which led to more indulgences.97 One of the worst offenses in this regard was the practice of masturbation, which, warned health reformers, wasted the energy of the bodily economy and frightfully compromised the developmental process. When untreated, masturbation inevitably led to sickness, to insanity, and to untimely death.98

If the laws of development were rightly followed, however, health reform advocates promised that normal growth would proceed in a consistent, balanced, and favorable fashion. Harmony and ease, not conflict and difficulty, were thus the watchwords of this body of early-nineteenth-century developmental thought. “All nature’s operations are gradual,” Fowler explained. “The sun does not burst suddenly upon our earth, nor go down instantaneously, but rises and sets gradually, besides being preceded and succeeded by slowly-increasing and diminishing twilight.”99 Such analogies from the natural world offered scientific justification for the perspective of one of the most influential texts on religious education of the antebellum period, minister Horace Bushnell’s Christian Nurture, first published in 1847. Bushnell denounced the evangelical practice of sudden, difficult conversion from depravity to goodness in favor of a process of moral training in which “the child is to grow up a Christian, and never know himself as being otherwise.” The traumatic conversion from sin to piety had a long history in Protestantism and was expected to occur during the late teens and early twenties as the moment of harrowing choice between the heedlessness of youth and the righteousness of Christian adulthood. More recently, during the revivals of the Second Great Awakening, conversions occurred earlier, in the early- and mid-teenage years. Bushnell argued that dramatic conversion need not occur at any age, that goodness was the product instead of consistent parental nurturing of the moral sensibilities from infancy onward. Since children were neither inherently depraved nor essentially moral, he maintained, they would certainly struggle between good and evil. 
The gradual inculcation of goodness would mitigate such conflicts, however, and make their proper resolution habitual, rather than a cause for suffering or concern for parent or child.100 Physicians in the health reform movement affirmed that as the spirit developed, the body and mind would grow. Thus, according to Sweetser, a slow and regular course of development kept the young “along in the beaten track of existence,” away from the “burs and briars of life,” and was most conducive to physical growth, moral improvement, intellectual advancement, and individual happiness.101 Likewise Alcott, who expected the young to strive toward excellence to develop all aspects of their natures to “a high pitch,” nonetheless prescribed that this be done by living according to the motto, “Make haste slowly.”102

To the ideal of harmonious, gradual growth, the “age of puberty” proved both the rule and the exception. In asserting the precept of protracted, steady growth, Blackwell deemed “earthquakes and revolutions” to be “destructive, not creative” forces. Like the other major life transformations that preceded them (birth, first dentition, and second dentition), the changes that occurred at puberty were to be accomplished in a “slow and complete manner,” “the result of long-continued action, working silently but constantly in the right direction.”103 That acknowledged, Blackwell nonetheless also conceded that growth was not an entirely uniform process but was punctuated by “special periods of excitement” when the body and mind were both particularly vulnerable before nature’s requirements and uniquely susceptible to influences from the external environment. Puberty stood out as the most important of the successive periods of rapid growth because the new functions acquired at this age were crucial both for the development of the individual and the progress of the “race.” It was also a critical age because the “evil” of sexual precocity, which bred both physical weakness and moral degeneracy, posed its greatest threat. Even here, though, Blackwell’s tone was ultimately reassuring. 
Like Rousseau, she recognized that nature potentially bequeathed to youth some measure of “storm and stress,” but prolonging “natural” dependence and innocence also provided the antidote to the dangers puberty posed: “[T]he physical education of the body, its perfectly healthy development, delays the period of puberty, and … a true education in which all the bodily powers were strengthened as well as the mental and moral ones, would be the most effectual means of outrooting this evil.” As long as society, parents, and youth did not seek to rush what nature intended to be a period of gradual and benign growth, Blackwell promised, the transformations wrought during puberty would proceed unremarkably and optimally and provide an important source for the vital energy that was essential for the child’s sustained progressive development into rational, autonomous, and moral adulthood.104

In rejecting Locke’s recommendations for early youthful independence in favor of Rousseau’s prolonged dependency, health reformers articulated wider cultural concerns about the unsettled status of youth during the early decades of the nineteenth century and then attempted to temper such anxieties by rendering the traditional social subordination of youth a natural imperative. Their veneration of youthful sexual innocence, which was to extend well through the teenage years, also expressed a resistance to the forces of social change. In this way, health reformers’ developmental principles, especially the dictate of gradual growth, performed a certain amount of conservative cultural work by offering up new explanations of and justifications for the preservation of the age hierarchies of an older order.105 By emphasizing the advantages that accrued to those youths whose growth proceeded in gradual fashion, however, health reformers also affirmed a new model of the self that they recognized was essential for the success of a new social order. Thus, they functioned as what historian Steven Mintz refers to as both “moralizers and modernizers.”106 Health reformers expected that scientifically enlightened mothers and educators would become cognizant of the organic demands for and benefits of children’s gradual unfolding. By following this natural law, such adults would be able to prevent “dangerous” youth from self-destructing or wreaking havoc on the social order as well as shelter “innocent” youth from the worst uncertainties and corrupt influences of the era, while preparing those young people in their charge to benefit from the promises for self-determination and individual advancement held out by such dynamic times. This was a tall order and one replete with many unresolved intellectual and practical contradictions. Not the least of these was to whom the privileges of mature selfhood secured through the development process ultimately belonged. 
For Locke and Rousseau, these privileges were to be realized exclusively by the elite male, in whom they were tested and incubated during the period of youth. Many health reformers’ views of child development and youth reinforced this expectation explicitly or implicitly; some also, however, opened up the possibility for alternative interpretations.

On the one hand, the laws of development that health reformers espoused were derived in conjunction with the changing imperatives of white middle-class work and family life. These “laws” offered both a description of and a prescription for the sort of growing-up experiences children and youth from this select social group needed to negotiate the process to their advantage. The precept of gradual growth, in particular, supported the expanded educational and training regimens that were increasingly crucial to securing middle-class status for nineteenth-century boys and girls. It also helped to protect the relations within the private, affectionate family from succumbing to the centrifugal forces of individualism.107 On the other hand, health reformers’ particular blend of organicism and environmentalism led them to perceive and insist on the “universality” of developmental laws across class lines. Indeed, for Blackwell, it was the working-class family that best honored the gradual-growth dictate and provided the normative model for others to follow. As evidence, she noted that youth in the laboring classes arrived at puberty later than those in the middle and upper classes. Honest, moderate farm or factory labor, she maintained, enabled the imperatives of organic development far more than did the pernicious indulgences of wealthy urbanites, which inevitably bred the ill moral and physical effects of precocity.108 Others, less sanguine about the inherent healthfulness of working-class life, relied on the law of order in exercise to make the case against child labor. Fowler asserted that because nature demanded that bodies attend almost exclusively to the requirements for rest and exercise throughout the period of their growth, limits had to be placed on children’s and youths’ participation in factory work. 
Likewise, Combe wanted legislators to be properly educated in the laws of “the constitution of the human body” so that they would see “the utter impossibility of combining [factory labor during childhood and youth] with … that moral and intellectual cultivation which is so imperatively required.”109 Whether health reformers saw working-class children and youth as exemplars of developmental laws or as possessing the same inherent rights as their middle-class counterparts to be shaped by them, their explicit proposition was that the requirements of age were to trump the exigencies of class, even as their class-based assumptions always implicitly informed those age requirements.

Health reformers also grappled with the relationship between the laws of life and the meaning and significance of sexual difference. Women were active creators, avid disseminators, and eager recipients of popular scientific knowledge about health, and they joined their fellow male hygiene enthusiasts in occupying a spectrum of positions on gender roles and identities.110 Ranging from endorsements of women’s premier moral authority in the home to radical calls for their equal status in society, this spectrum of views was also represented within and across other reform movements, from temperance to abolitionism to women’s rights.111 Many health reformers, men and women alike, offered up and embraced the cause of hygiene as an avenue by which middle-class women could exult in their new role as primary caretakers of the family. Such a prospect was heartily endorsed by the members of the American Physiological Society—almost one-third of whom were women—in a resolution passed at their second annual meeting: “That woman in her character as wife and mother is only second to the Deity in the influence that she exerts on the physical, the intellectual, and the moral interests of the human race, and that her education should be adapted to qualify her in the highest degree to cherish those interests in the wisest and best manner.”112 All health reformers deemed it the responsibility of women to ensure the physical, mental, and moral health of the men and children in their families. Women needed to be rigorously educated about the laws of physiology and health and trained in the scientific management of the household. Women also needed to attend more conscientiously to their own health, which health reformers and orthodox physicians alike recognized as particularly dire throughout the antebellum period. 
In varying degrees, health reformers advocated for improving adult middle-class women’s health by adopting exercise regimens, abolishing restrictive and decorative dress, and adhering to sexual restraint within marriage. Whatever their particular perspective on these remedies, all of those who propagated and heeded the call of health reform took part in elevating the status and enhancing the influence of women within the domestic sphere.113

Some health reformers went further than this, more boldly extending the bounds of woman’s sphere into the public realm, although they most often did so by appealing to the logic of sexual difference. Indeed, in writing, in speaking, and in teaching about health, and, for some, in seeking medical training and entrance into the medical profession, women health reformers attested to women’s capacity to claim social roles for themselves beyond the purview of domesticity. Elizabeth Blackwell was one prominent example of those health reformers who sought to endorse women’s domestic role as well as enlarge it. Born in 1821 in England to progressive parents who were active in the temperance, women’s rights, and abolitionist movements, Blackwell received an education equal to that of her brothers, and, from an early age, she was encouraged to cultivate an independent mind and sense of social responsibility. The Blackwell family moved to the United States in 1832, first to New York, then New Jersey, and finally Cincinnati, where her father unsuccessfully pursued opportunities in the sugar-refining business. Her father’s death in 1838 left the family emotionally bereft and financially destitute. Uninspired by the teaching she took on to help support the family, dreading the prospect of marriage, and increasingly drawn to transcendental and Swedenborgian ideas, Blackwell turned to medicine as a moral calling. Determined to overcome resistance to her pursuit of medical training, and winning some admirers and supporters in the process, Blackwell became the first woman to receive a medical degree in the United States, from Geneva College in New York in 1849. Following further study in England and Paris, she returned to New York hoping to launch her medical career. She again faced obstacles from colleagues and a public hostile to female doctors. Blackwell turned to writing lectures about hygiene, which were published as The Laws of Life in 1852. 
She went on to become a pioneer advocate for women’s health and medical education in the United States and England, as founder of the New York Infirmary for Women and Children and the Women’s Medical College of the New York Infirmary. When she returned permanently to London in 1869, Blackwell strongly supported establishing similar institutions in the United Kingdom.114 According to Blackwell, it was women’s superior capacity for compassion that made them such ideal healers, both as mothers in the home and as medical professionals. In bringing their heightened moral sense to the task of caring for their own bodies and for those of others, women also nursed the goodness of the soul and thereby held the potential to transform the practice of medicine and bring about the reformation of the larger social world. Blackwell thus went further than the American Physiological Society’s recognition of women’s maternal roles in also singling out for praise those “spiritual mothers of the race” who were “often more truly incarnations of the grand maternal life, than those who are technically mothers in the lower physical sense.”115 For Blackwell, women’s innate moral superiority not only necessitated their greater authority in the domestic sphere but also justified their expanded participation in the medical profession and other social reform projects.

Multifaceted views about gender figured into health reformers’ descriptions and prescriptions for the development of the child. Echoing Locke and Rousseau, health reformers emphasized the similarities, rather than the differences, between the sexes in childhood. In doing so, they challenged characterizations within orthodox medicine that depicted the female body as essentially weak and debilitated.116 Indeed, for many health reformers, the primary motive for alerting mothers and teachers to the imperatives of organic development in the first place was to expose the violations of such principles by girls and those who cared for them, as well as to offer remedies for the current epidemic of ill health among female youth. Girls thus often became the more prominent subjects of developmental thinking among health reformers, a trend that would continue in the history of ideas about child development. Thus, Brigham saw the tendency to cultivate the mind at the expense of the body to be “more particularly true as regards females” and argued that more careful attention had to be paid to the girl’s physical education during childhood. The reason the female body was so often neglected, Combe explained, was that orthodox physicians, teachers, and parents were operating under the mistaken assumption that the laws of development differed according to the sex of the child. “[S]uch is the dominion of prejudice and habit,” he declared, “that, with these results [of girls’ poor health] meeting our observation in every quarter, we continue to make as great a distinction in the physical education of the two sexes in early life, as if they belonged to different orders of beings, and were constructed on such opposite principles that what was to benefit the one must necessarily hurt the other.” “No time is lost [in the girl’s education],” chastised J. 
Wilson, “in impressing her young mind with the great idea that is to govern her whole after-life—that she is not a boy, and not even a child, but a ‘little woman’—that she must be prim, demure, and cautious in all her movements, ‘like mamma’—that to run and romp is ‘unladylike,’ and to kick up her heels an indelible reproach on her embryo womanhood.” Blackwell’s The Laws of Life, significantly subtitled “with special reference to the physical education of girls,” intended to correct the misconception of “the great idea” by insisting that developmental principles be uniformly applied to girls and boys. For all children, she contended, the only way to achieve the ideal of slow, steady, balanced growth, along with the physical, mental, and moral benefits that flowed from it, was to keep early education focused on physical development and to avoid any sort of excessive study or undue emotional excitement that might compromise the body’s energies and threaten its progression toward its divinely ordained potential and purpose.117

This insistence on the uniform application of the laws of development to girls and boys alike dovetailed with a broader Victorian conception of androgynous childhood, signified in the homogeneously feminized clothing of infants and young children, which both confirmed the dependent status of children as a group and held them up as paragons of sexual innocence. Such a construct linked the establishment of sexual difference with reproductive capacity and (hetero)sexual desire and disassociated all of these from the province of childhood.118 Even so, it also ultimately functioned in conjunction with, rather than in subversion of, the ideology of separate spheres. As historian Karin Calvert finds in her examination of the material culture of early childhood in this period, “An androgynous image of children was acceptable to Victorian parents only so long as the nature and destiny of each sex seemed unalterable and secure … All of this meant that parents were faced with the somewhat conflicting need both to reassure themselves of their children’s asexual innocence and to receive clear indications that any child would be able to fill its proper place in society.”119 In the same way, health reformers’ disavowal of the assumption that girls and boys occupied “different orders of beings” during childhood worked to reinforce, rather than undermine, prevailing notions about the complementary characters and divergent destinies of the male and female sex and about the importance of sexual difference to social order and progress. Indeed, their main point about sexual difference was that it was organically developmental—that it appropriately emerged at a particular moment in the life cycle but was essential nonetheless: ordained by God and Nature and static in the ideal qualities that were attributed to femininity and masculinity.120

In the health reformers’ conceptions of development, puberty was a particularly important time in the lives of girls and boys because it was then that the bodily differences that determined their future identities, roles, and responsibilities were manifested. For boys, however, the changes that occurred during puberty not only established a uniform gender identity but also secured a dynamic individual identity. Health reformers’ conceptualizations of the “age of puberty” in the male thus affirmed Locke and Rousseau’s expectation that youth was a critical stage of life for boys, unique to their development, because their passage through it secured for them alone this highest privilege of mature selfhood. Two European medical texts with American editions that extensively described the phenomenon of male puberty and its implications for development were British physician William Acton’s Functions and Disorders of the Reproductive Organs and French physician M. Lallemand’s A Practical Treatise on the Causes, Symptoms and Treatment of Spermatorrhoea. For both Acton and Lallemand, male puberty was marked by the body’s production of semen and the accompanying onset of feelings of sexual desire. Both recognized that the inauguration of such capabilities could not help but influence the boy’s body and mind, although whether for good or for ill was a point of some disagreement between them. “Nothing can prevent the genital organs, at the time of their development, from reacting on the economy and giving rise to new sensations and ideas,” Lallemand maintained. Whereas he conceded that such reactions might have detrimental effects on the boy, Acton instead emphasized that the secretion of semen was the source of an increase in vital force for the boy at puberty, which in turn was responsible for sustaining the progress in physical, mental, and moral development that made possible his realization of mature adulthood. 
The key to accessing this vitality, he explained, was that excepting those occasional involuntary nocturnal emissions that provided the continent individual with a natural release of plethora, the semen that began to be produced at puberty was to be “reabsorbed into the animal economy,” thereby “[augmenting] in an astonishing degree the corporeal and mental forces.” He went on to describe this process with much awe and enthusiasm: “This new … powerful vital stimulant—animates, warms the whole economy, places it in a state of exaltation and orgasm; renders it in some sort more capable of thinking and acting with ascendance—with a superiority, as we equally observe among animals in the rutting season.” According to Acton, nature intended youth to be marked by “robust health and absence of care.” It was only when boys violated nature’s laws by consciously rejecting continence that the period of their youth took on qualities too often accepted as inevitable in the popular imagination: debility, sadness, sensitivity, restlessness, agitation, and apathy. Such a state of “sexual suffering” was, Acton averred, often much exaggerated, “if not invented” for the purpose of justifying immoral behavior.121

Complete continence throughout youth secured for the individual boy physical strength and mental power because it allowed his vital energy to be used for its proper function of building up his growing frame and cultivating his intellect. In addition, it ensured the robust reproduction of the human race, for only boys whose reproductive organs were fully mature could father healthy offspring. Most important, it propelled the boy’s moral development because resisting temptation strengthened the boy’s moral fiber and made possible his capacity for self-government and moral autonomy later in life. As Acton asserted, the continence that he advised was not “mere ignorance.” Rather, “[t]rue continence is complete control over the passions, exercised by one who knows what they are, and who, but for his steady will, not only could, but would indulge them.” If there was anything difficult or unpleasant about youth, this was surely it, although the rewards of emerging victorious from such a struggle were great indeed. “Grant that continence is a trial, a sore trial, a bitter trial, if you will,” Acton elaborated. “But what is the use or object of a trial but to try, to test, to elicit, strengthen, and brace, whatever of sterling, whatever of valuable, there is in the thing tried?”122 Following Rousseau, Acton posited that out of the boy’s repeated refusal to gratify the selfish sexual passion during youth emerged the reproductive instinct and, finally, the social sense, and thereby the highest moral capacities of the man for love, compassion, and justice were born.

American health reformers likewise characterized male pubertal development as naturally entailing a vigorous body, an energetic mind, and a rigorous moral sense (although Graham preferred to focus on the role of the nervous system as the source of such vitality, as opposed to the semen, the importance of which he deemed to be “exceedingly overrated”). The progression through youth into manhood, Samuel Woodward explained, “requires all the energy of the system, greatly increased as it is at this period of life, which, if undisturbed, will bring about a vigorous and healthy condition of both the mental and physical powers.”123 In his Familiar Letters to Young Men, Alcott provided a depiction of male puberty for young men that was both less and more sanguine than this. “There is a period in every young man’s history,” he acknowledged, “when dangers of every kind thicken around him, and seem to threaten inevitable destruction … And such is the violence, to most, of the storms that assail at this critical period, that we are not to wonder if thousands and millions of our race are left to suffer, under their influence and their own experience, a most fatal and terrible shipwreck.” Such dangers were the product of the inharmonious nature of the teenage boy’s body and the society in which he lived. Alcott’s point, though, was both to name the problem and to challenge its inevitability: “If it should be argued that young men, such as I am addressing, are not to be expected to have those well-balanced natures which farther education and a more extended experience would be apt to develope (sic)—that indeed they cannot have them—I should meet the argument by a flat and positive denial. Your character should be as harmonious at four as at sixteen, and at sixteen as at sixty. 
It should be in harmony at every age of moral accountability, and in all circumstances.” Moreover, insofar as struggles against inharmonious conditions, both internal and external, ensued, it was this stress that made men great.124 According to Alcott, strength and harmony of body, mind, and character were every boy’s birthright; at puberty, the boy was poised and challenged to exercise his manly demeanor to determine his destiny and shape his world—to become “whatever [he] will resolve to be.”125

Physicians in the emerging disciplines of obstetrics and gynecology established a stark contrast between the possibilities for the boy’s physical, mental, and moral progress occasioned by the onset of puberty and the limits placed on the girl’s development by the maturation of the reproductive organs. The girl not only failed to secrete semen, the impetus for the boy’s physical, mental, and moral growth, but the first menstruations actually impeded her development by pitting her body’s extensive demands for vital energy against her educational prospects, placing both of these beyond the control of her individual will. This rendered her “age of puberty” inherently volatile, enervating, passive, and unhealthy and made it impossible for her to achieve the ends of mature adulthood promised by Locke and Rousseau.126 These doctors were far less concerned that girls and boys were treated too differently during childhood than that pubertal girls and boys were treated too much alike. For them, puberty in the girl was, exclusively, the moment when sexual difference was established and secured, and understanding how that happened and what it meant for both individual and society depended on a thorough scientific explanation of the female body and mind during this most critical stage of development. As shown in Chapter 2, conservative doctors’ efforts to exclude girls from the privileges of youth by documenting their developmental differences and deficiencies reached their height during the second half of the nineteenth century, intensified by the rising authority and mutual influence of reproductive medicine and evolutionary science and made ever more imperative by the growing incursions of girls and women into the public sphere.

Meanwhile, health reformers provided an important counterpoint to the rising conservative medical discourse about female puberty. They tended to describe girls’ experience with puberty in terms broadly similar to those they used for boys—as potentially dangerous but not inexorably so. Expressing their confidence that health maintained during childhood and youth would more than suffice to see the girl through the physical and moral demands of this important stage, they were intent on establishing and communicating the organic developmental laws that would enable mothers, teachers, and girls to preserve that health, rather than documenting female developmental failure and lack. Indeed, in his The Young Woman’s Book of Health, Alcott went so far as to apologize for focusing on the “dangers and pitfalls” of this stage in girls’ lives, lest he “either lead them to injudicious dosing and drugging, or make them moping and melancholy.” As with his prescriptions for male development, his larger intention was not to resign girls to succumbing to the inharmonious conditions marking the age of puberty but to encourage them to manage internal and external pressures to emotionally and morally auspicious ends. “Seeing, as [the girl] must by what I have here written, to how great an extent God has placed her happiness and her misery within the range of her own choice,” Alcott proposed, “will she not be led the more earnestly to secure the one and avoid the other?”127

Nonetheless, Alcott and his fellow health reformers both assumed and propagated notions of sexual difference. One way they did this was by deeming the imperative of gradual growth to be even greater for girls than for boys because “natural” female purity was more threatened by the dangers of sexual precocity. Most health reformers were advocates of a single standard of sexual morality and emphasized the importance of chastity for both young men and women. Lust was, however, also grudgingly recognized as an attribute of male nature, and the young man’s effort to control it was one factor that propelled his moral development and provided the foundation for his self-rule and authority over others later in life. In contrast, premature sexual expression by girls served no such purpose, for it violated the essential “passionlessness” of their nature and therefore portended only the direst physical, moral, and social consequences.128 The implications of health reformers’ assumptions of female innocence for understandings of the girl’s moral development were mixed. On the one hand, the girl, in whom the sex instinct was always the reproductive instinct, was perceived as inherently more moral than the boy. Her innate love and compassion served as the vital source of her moral authority in the family and, to some extent, in society. On the other hand, because the girl never had to struggle to control the sex instinct, she was also seen as incapable of achieving the same degree of moral rigor as her brother and therefore unable to exercise the self-control and broad social responsibility expected of and enjoyed by him. Joining other antebellum reformers in accentuating women’s moral superiority and recognizing the importance of “female values” to personal salvation and social improvement, health reformers tended toward the former interpretation of the girl’s moral development. 
In either case, though, female sexual desire was patently denied, and girls’ sexuality was deemed to require more stringent external controls than boys’ to ensure the protection of both the girl and the larger community.

Furthermore, although health reformers did not depict female development as a process marked by debility and lack, they did not necessarily expect that the girl would move through it to become “whatever [she] will resolve to be.” To be sure, they appealed to Blackwell’s principles of use, order, and balance to claim for female as well as male children the right to a healthy, vigorous, and active development. At the same time, they also construed her fourth law of development, which stated that every being reached its “highest welfare” and greatest happiness by conforming to its “special purpose,” to mean that the final aim of the female child’s growth was the achievement of sexual difference, which entailed, exclusively, the identities and roles of wife and mother realized in the private home. As Brigham explained, it was because girls were endowed with a distinctively sensitive nature that their physical education had to be attended to in equal measure to boys’. Girls who were given the freedom to romp and play and did not adopt the manners and mores of adult women too soon had the best chance that their more active imaginations and more intense emotions would not be “rendered excessive” but instead would give rise to the “finer sensibilities” in which women were superior to men.129 Alcott, who hoped to convince young women that such traits as rationality, originality, perseverance, and decisiveness were not the exclusive province of their brothers, nonetheless saw their main task in life to be putting these qualities to their own distinctive use as a gendered class: “It is quite time that woman should understand her power and her strength, and govern herself accordingly. It is quite time for her to stand upright in her native, heaven-born dignity, and show to the world—and to angels, even, as well as to men—for what woman was made and wherein consists her true excellence.” Fowler put a fine point on Alcott’s vague injunctions. 
Girls needed strong and fully developed bodies to fulfill their unique and sole destiny as mothers of the human race.130

Elizabeth Blackwell did not disagree. However, she also envisioned a more expansive potential for female development, applying her developmental thinking to enlarging the girl’s capacity as individual and female, albeit still largely within the context of woman’s sphere. For Blackwell, as for her fellow health reformers, gradual growth was perhaps best illustrated by the imperative of protecting the sexual innocence of the girl. She went further than this, however, to argue that girls could and ought to pass through a prolonged stage of youth that not only preserved their purity but also enabled their physical, mental, and moral powers to unfold and be cultivated to their fullest potential. Nature would take care of some of this; the right kind of education and the girl herself would do the rest. Thus, Blackwell declared that the first problem with female education, which was recognized by other health reformers as well, was that the girl’s body was not adequately cultivated during childhood and youth. Overlooked or uncritically accepted by others, however, was the additional problem that the education of the girl’s mind was thought to be complete when she reached 16, just the age, Blackwell proclaimed, at which it should begin. Whereas bodies completed most of their crucial growth in the years up through the end of puberty, the mind had the capacity to continue to develop throughout life. Too often, she lamented, girls spent the years of their young womanhood engaged in the frivolous pursuit of the pleasures of partygoing or novel reading, or, worse still, succumbed to early marriages out of misplaced yearnings for excitement. 
Instead, the years from 16 to 25 were to be spent in the pursuit of higher mental cultivation that would prepare the girl for the “active duties” of all of the roles of her adult life—wife and mother, most surely, but also “member of society” and “human being.” The laws of development dictated that the growth of the body was to precede the growth of the mind, Blackwell pointedly maintained, not to supersede it.131

Furthermore, per the law of compound movement, some sort of balance among the various realms of development was to be sought at all stages of the girl’s and woman’s life. During girlhood, the emphasis on physical growth would be tempered by some attention to the mind and spirit. During youth, the prominence of mental cultivation would be enabled by maintaining sound bodily health. During adulthood, woman’s focus on the family would be enriched by her ongoing engagement with the wider world in mind, in body, and in spirit.132 Blackwell did not fundamentally challenge the masculine norms of youthful development established by the Enlightenment philosophers and reinforced by nineteenth-century physicians of both conservative and unorthodox ilk. Instead, she maintained that rationality, autonomy, and social responsibility were as much the product of the girl’s development as of the boy’s. Along with her fellow health reformers, she did celebrate the special contribution women made to the family and society through their superior moral capacity. She went further than most, however, in conceiving of the range of possibilities for individual women to express this tendency and in illuminating its harmonious relationship to other developmental imperatives throughout the female life cycle.

As interpreters of Locke and Rousseau in light of the changing experiences of early-nineteenth-century American youth, health reformers made lasting cultural contributions to thinking about child development, not the least of which was to help lay some important groundwork for the modern concept of adolescence. Within the context of the growing authority of medical science, they raised the specter of youth as a problematic stage of life and highlighted the importance of puberty to physical, mental, and moral development. They also attempted to assuage cultural anxieties about the dangers of youth, especially the dangers of puberty, with prescriptions for gradual development that reinforced the immaturity, innocence, and dependent status of youth. At the same time, they promised the young the opportunity to achieve new standards set for the mature self marked by autonomy, rationality, and moral responsibility. That realizing such standards was the product of natural growth, aided both by the right kind of education and individual wherewithal, held out the possibility that the privileges and responsibilities of mature selfhood derived from the process of development might be universally parceled out across lines of class and gender. Girls of the white middle class in particular, who in their social behaviors were at the forefront of carving out a new phase of life for youth, were defended in this discourse and even appealed to as exemplars of certain inviolable developmental laws. They were, however, contained and constrained by it as well, as health reformers also applied the discourse of development to explaining and reinforcing categories of social difference.