Abstract

This essay provides a rational reconstruction of the author's genetically inscribed inclination to do normative ethics with an historical bent and offers some reflections on the value of historical thinking for bioethics.

When asked why he turned from philosophy to the history of ideas, Isaiah Berlin said that he was worried that if he stayed in philosophy he wouldn't know any more at the end of his life than he had at the beginning. Mark Lilla (2013) makes the point in a somewhat more constructive way: "His [Berlin's] instinct told him that you learn more about an idea as an idea when you know something about its genesis and understand why certain people found it compelling and were spurred to action by it" (xi).

It took me decades longer than the brilliant Oxford don to appreciate the point, but when I did it came as a lightning strike of recognition. My brain seems to have been keyed from the beginning in the syntax of intellectual history. Even though I chose academic training in philosophy, the historical approach was always there, an inevitable touchstone to which I was spontaneously attracted. I attribute the fertilization of this deep inclination to the unusual circumstances of my childhood. My house was awash in ideas. My father, the psychiatrist J. L. Moreno, presided over a small mental hospital and therapy training center (a "world center" he called it), in which he promulgated his pioneering work on psychodrama, group psychotherapy, and sociometrics. His decades-old battles with psychoanalysis on one side and his imitators on the other didn't just seem like a life-and-death struggle; for him they were.

J. L. was a self-described megalomaniac, the biggest character in my life then and still. He was eccentric, combative, colorful, charismatic, paranoid, original, and inspired, a tireless promoter of his ideas until the day of his death at age 85, when I was 22. By then I'd become a kind of participant-observer in the psychiatry and social science of the postwar world, having attended dozens of professional meetings, and not only in the United States. Since I was five years old I had also traveled with my parents to conferences and group therapy workshops in Western and Eastern Europe and the Soviet Union. Somewhere along the line I "got" that I was witnessing the unfolding of political and intellectual history, as well as some pop culture milestones. At the beginning of my book Mind Wars (2012), on neuroscience and national security, I recalled a softball game with an LSD therapy group that rented my father's 20-acre sanitarium in 1962.

I was a good student everywhere but in the classroom—or, more precisely, on exams. I read voraciously, especially science fiction and the varied collection in my father's library, with an emphasis on psychology and history. Not only did I much prefer dabbling in whatever caught my eye on his shelves to those boring high-school textbooks, but what school could possibly have been more interesting than the life I was living? By the time I went to college I had traveled the world, treated like psychotherapy royalty and surrounded by passionately committed mental health professionals, truth seekers, and seriously ill mental patients—not always distinguishable. But it came too easily. My parents' world allowed me to build an intellectual mansion without a foundation. I knew that my lack of interest in academic performance wasn't serving me well (it didn't help that my parents weren't too concerned about incidentals like good grades either), but only when I stumbled into an introductory philosophy course did I catch a spark.

Naturalism

Hofstra's Evelyn Shirk was one of the few women in philosophy in the 1960s, and one of the still fewer full professors. Then in her 50s, she was a product of the Columbia University department. Her approach to normative issues was firmly in the naturalist tradition of John Dewey, for whom moral values are emergent properties of natural forces. Her own contribution was her book, The Ethical Dimension (1965), in which she emphasized the importance of context. I remember especially an example she liked to give about her and her husband's summer home in Vermont (he was Justus Buchler, a distinguished speculative metaphysician at Columbia and Stony Brook). There was a lake that was taken for granted until various environmental and development problems threatened it. Suddenly the whole community mobilized to save the lake. Now the people in the town discovered that they were possessed of a very practical moral and aesthetic value about that body of water, one that they had barely noticed before the crisis.

It was a modest point about the way that values emerge from experience, but it obviously made an impression on me. And this was 1970, just around the first Earth Day but years before environmental ethics became an established field. Shirk and I became close, and she suggested that for graduate school I might pursue one of the new law and society programs. I had never heard of bioethics, a field that was only being born at the time, and the moment passed. But mainly I valued having an anchor in a discipline that I enjoyed and was reasonably good at. Philosophy gave me a center and identity that provided a structure for my broad and somewhat scattered range of interests.

Naturalism showed up at another critical point a few years later, when I took a seminar with John J. McDermott, then at the City University of New York and later for many years at Texas A&M. He introduced me to Dewey, William James, Charles S. Peirce, and George Herbert Mead as classical American philosophers and helped me understand the difference between pragmatism and naturalism. Peirce in particular called pragmatism "a way to make our ideas clear." It's a method of distinguishing the meaning of one word, sentence, or hypothesis from another, or, in James's phrase, "a new name for some old ways of thinking." By contrast, Dewey's naturalism is a worldview. Some would say it is a type of process philosophy often associated with people like Henri Bergson, A. N. Whitehead, and Charles Hartshorne, though for the young Dewey the main influence was Hegel. Naturalists believe that reality is dynamic, a scene of constant change, rather than the static cosmos of a Platonist. Because I appreciated the difference between pragmatism and naturalism early on, I was inoculated against the flurry of interest in "pragmatic bioethics" among some of my most admired colleagues a few years ago. For example, my essay in an anthology on pragmatism in bioethics focused on ethical naturalism as a way of understanding bioethics and other so-called applied ethics fields (Moreno 1999).

Intellectually speaking, my naturalist sympathies made the move from philosophy to bioethics, well, natural. Then there were practical considerations. As an untenured assistant professor at George Washington University in 1979, I was happy to accept any assignment that might make it harder to let me go, including participation in an "experimental" bioethics course in the medical school. And, inspired by Dewey's example, I also wanted more of a "public" career than that ordinarily available to the typical academic philosopher in the late 20th century. Later my professional ambitions became the focus of a not entirely flattering New Republic story about the burgeoning bioethics field called "When We Were Philosopher-Kings" (Shalit 1997). After a year at the Hastings Center, I returned to GW and a part-time role as "philosopher-in-residence" at the Children's National Medical Center. (I made up the title; it's still my favorite.) When SUNY Downstate Medical School invited me to found a Division of Humanities in Medicine in 1988, I jumped at the chance.

Crisis

The nightmarish quality of the early HIV/AIDS era, before the advent of reliable therapies, left little emotional space for ruminating about history. In retrospect it was mind-numbing. At Children's Hospital in 1986, we speculated about what it would be like for HIV-positive babies to survive to adolescence and sexual maturity. When I arrived at Downstate, virtually all the patients in internal medicine beds at Kings County Medical Center were there for HIV or diabetes complications. The obstetrician Howard Minkoff and I wrote papers about HIV and pregnancy. Colleagues were dying, including some of the leaders of the fight against AIDS. At Downstate, near-panic ensued when a medical resident suffered a needle-stick. AZT was prescribed, though its risks and benefits were uncertain. A resident in surgery, also exposed, was reassigned to research. Medical staff were often fearful of drug-addled patients, especially in the emergency rooms. In New York State, controversy about the ethics of testing babies for HIV antibodies turned into a nasty political squabble. Along with bioethicists at Montefiore Medical Center, we developed a protocol for entering HIV-positive pregnant women into clinical trials without the involvement of fathers, who were often absent or abusive. I routinely found myself explaining American history and culture to residents who had been physicians in places like Egypt. They were mystified not only by their patients' culture (largely African American), but also by the ethical and legal requirements for something called autonomy. The role called for anthropology; bioethics and history, not so much.

There was a waterfall of sadness. One afternoon I moderated an interdisciplinary case conference about a seven-year-old boy at Kings County who was dying of AIDS complications. His whole life had been spent being shunted from one foster home to another, where he had seen several of his foster siblings dying. The question was how aggressive to be in his last hours. He didn't seem to want to be treated. Even a staff so accustomed to suffering was paralyzed with indecision. Finally, a consensus against aggressive treatment was reached.

My experience in clinical ethics at the height of the AIDS era led me to think about the nature of moral consensus. This, again, was an easy turn for a philosophical naturalist, as I tend to think that moral deliberation is and ought to be contextual and social, and therefore implicitly historical. More generally, I came to be interested in the way that modern bioethics emerged in the 1960s, a subject that continues to fascinate me. Both of these interests played a role in my book about bioethics and moral consensus, Deciding Together (1995). I was impressed by Allen Buchanan and Dan Brock's important book about surrogate decision making, Deciding for Others (1989), and by Martin Benjamin's Splitting the Difference (1990) on moral compromise. Neither seemed to me to capture the social and consensual nature of ethically laden decisions. In writing Deciding Together, I came to appreciate that the hard challenge for a naturalistic moral consensus theory is hedging against a conclusion that, however tentative, excludes and exploits minorities. My answer was a set of limiting conditions on consensus processes (including and respecting alternative points of view) that themselves are products of experience. It was a Rawlsian move that was the best I could do. If nothing else, that project forced me to come to terms with my limitations as a philosopher.

The Advisory Committee on Human Radiation Experiments

Naturalism has proven to be a durable perspective for me, especially as I came to appreciate the dearth of history in bioethics scholarship and the way that deep historical knowledge can illuminate the meanings of terms and concepts that we take for granted. In that respect, when Ruth Faden asked me to join the staff of the Advisory Committee on Human Radiation Experiments (ACHRE) in 1994, I was poised to make a leap into what was for me a radically new area. ACHRE was charged by President Bill Clinton with determining what ionizing radiation experiments had been done involving American citizens from 1944 to the early 1970s, what the ethics policies were at the time, and how they compare with our rules now. My job was to map the development of the ethics standards, especially in the Department of Defense, the Department of Energy, and their predecessors.

Assisted by Nancy King, Ruth and Tom Beauchamp had written the most authoritative history of informed consent (Faden and Beauchamp 1986), which proved to be a fundamental source for us, but we realized that no previous work had captured the national security angle. That would require digging into voluminous government archives, a formidable task that would require considerably more resources than an ethics project can normally command. The challenge was exacerbated by the fact that materials were not accessioned with an eye to retrieval for purposes of ethics history, so the cooperation of astute archivists was crucial. And then there was the still greater obstacle that many documents were in classified collections. Under ordinary circumstances, Freedom of Information Act inquiries could take months or years, and even then without prior access to the files one would often be flying blind.

None of that was an obstacle for the Advisory Committee. With its presidential mandate came a significant budget, the clout required to open federal agency doors, and, most important from the historical standpoint, the authority for members and staff to obtain security clearances as needed and to request declassification of promising documents. Many committee members preferred not to have a clearance so that they would not have to keep track of what was and what was not declassified during public sessions. That left the job to several staff members who were in any case the ones mainly responsible for poring through archives. There was a satisfying historical symmetry in the Department of Energy ID card I was given as an ACHRE staffer, one marked with a "Q." This was a clearance code that dated back to the post–World War II Atomic Energy Commission, equivalent to "top secret" in other agencies. About the only legal authority the Advisory Committee lacked was the ability to subpoena witnesses. However, this was not a legal proceeding. At the time there were a number of lawsuits working their way through the courts, some of which benefitted from ACHRE research.

Here I pause to underline the invaluable role of the federal archivists in guiding our desperately time-limited forays into the boxes; more than one historian has complained to me about the inadequate finding aids for U.S. government records. As well, the trained historians with whom I've worked are relentless in their approach to piecing together elusive stories from mountains of seemingly disconnected records. Watching them work, I came to think that they possess a special kind of intuition for the telling clue. Any historian would salivate at the daily scene at the ACHRE offices in downtown Washington after the staff got into the various federal archives. Nearly every day, often several times a day, dozens of reams of paper were wheeled past my office, copies of documents that were previously inaccessible if anyone even knew about them. In fact, the situation was overwhelming. Often I had to decide between continuing to write a briefing paper for the committee and wandering down the hall to start digging into the contents of a new box. A single document could upend some theory or other that we had developed about the origins and significance of some policy or case.

After a remarkably intensive 18 months, the Advisory Committee published a final report, a combination of history and ethics analysis that resulted in a set of findings and recommendations, including recommended apologies and compensation from the federal government, a result that went beyond the president's charge and apparently did not please the White House staff. This did not mollify critics who were angry that the committee did not single out individuals for criticism, especially government officials, but the members decided that their job was not to assess individual blame, especially given that the chains of evidence about culpability can be difficult to establish decades after the fact. Nor did the committee have the ability to do such an investigation, since, as noted, it lacked subpoena power. Even the decision to find the federal government responsible for certain moral wrongs was fraught, considering that retrospective moral judgment can be an easy target for charges of second-guessing. This problem was itself the subject of much discussion by members and staff. However, once the evidence showed that even by the standards of the day certain moral norms were violated, readers could draw their own conclusions about moral blameworthiness.

Besides vastly enriching my understanding of the origins (and, I'd argue, the very meaning) of key terms and concepts like consent, my work for ACHRE made me realize that no one had provided a comprehensive historical account of the role of human experiments in advancing the purposes of national security. It was one of those rare experiences where, the scales having fallen from my eyes, the connections seemed so obvious and essential they couldn't be ignored. As the policy angle for human radiation experiments was entangled with the records of policies on biological and chemical weapons, I decided to tell that story in a way that explained and extended the ACHRE findings. The title of the book that resulted is Undue Risk, a term that has a prominent role in the Nuremberg Code. Besides being able to draw upon the now-public documents from my experience on the ACHRE staff, I dug into political history, military history, and the history of medicine, as well as conducting a number of interviews.

One theme that comes up repeatedly in Undue Risk is the history of using military personnel in medical experiments. On the morning of a meeting with Defense Department officials, my research for ACHRE on this subject was the basis of a Washington Post story about the seeming inconsistencies between the policies and the practices during the Cold War. Apparently the Post story got plenty of attention across the Potomac as soon as it appeared. As we were waiting outside a Pentagon conference room for the meeting to begin, an officer took me to task for making it seem as though the parents of American young men and women couldn't trust their commanders not to make them human guinea pigs, even though the events in question had taken place 40 years before. I thought it would be important to get the perspective of modern-day medical volunteers. That led to a day at the U.S. Army Medical Research Institute of Infectious Diseases at Fort Detrick in Frederick, Maryland. I was given permission to interview a number of medics who were mainly lab assistants but could also be recruited for infectious disease experiments. Apart from a modest payment for studies involving blood draws, there were no benefits to volunteering beyond satisfying intellectual curiosity and meeting a new challenge. The medics also had representatives on their institutional review board (IRB).

For me, one of the principal revelations of the national security lens has been the key role of the military and other entities responsible for weapons development in the evolution of policies related to experiment volunteers. For example, as both the ACHRE final report and Undue Risk explain, the term "informed consent" first appeared in writing in an Atomic Energy Commission letter in 1947, and the first agency to adopt the Nuremberg Code was the Department of Defense in 1953. By contrast, ACHRE member Jay Katz observed that the American academic community saw no reason to adopt a code written with Nazis in mind. (Today the Department of Defense and the Central Intelligence Agency are both governed by the federal Common Rule for human experiments, as are 16 other federal agencies.) In one sense those stories turn customary expectations on their heads, for one might have thought that civilian agencies would be the pioneers in this field. Yet a closer look at the contexts of these milestones shows that their precise significance isn't so clear now, even to those responsible for promulgating them at the time. There are important lessons here, both for the way that bureaucracies operate and for the limited understanding that human beings have of their own motivations, interests, and intentions.

Anthrax

My interest in history gave me a chance to be a minuscule part of it just after Undue Risk was published. In the late 1990s, concerns about biological weapons were ramping up. The outgoing Clinton administration decided to put greater emphasis on protecting the public health in the event of an attack by one of a short list of agents, including anthrax. In July 2000, I was asked to serve as the "ethicist" in a one-day meeting of various interested parties at the Food and Drug Administration's (FDA) headquarters in suburban Washington. The question was whether this advisory panel thought that the powerful antibiotic ciprofloxacin, marketed by Bayer as Cipro, should be approved for human use following exposure to inhalational anthrax. Besides the fact that Cipro is associated with rare but unpleasant side effects, there was the problem that for ethical reasons it couldn't be tested in humans, as that would require deliberate exposure of human subjects to a potentially lethal bacterium. But after several presentations that described clinical experience with Cipro and tests of the drug in anthrax-exposed primates, it became clear that under extreme circumstances and with few other good options, Cipro was a reasonable choice. The FDA's decision to approve Cipro for inhalational anthrax set a little-known precedent for some drugs to be approved for human use under what is known as the "animal rule."

So much new information was coming my way that day (including the correct pronunciation of the tongue twister ciprofloxacin) that I don't remember much about the proceedings. But I do recall being struck that even after several hours, none of the speakers had mentioned how anthrax could be deliberately and diabolically disseminated so that it would be inhaled, as the usual route of transmission is through the skin from handling animal hides (the inhalational form is nicknamed "woolsorter's disease"). The only scenario was offered in passing by an expert from the Louisiana State University vet school, who theorized that someone could get on the top of a tall building and release some powdered anthrax. Just 13 months later, five people died and 17 were infected by the "anthrax letters" sent through the U.S. mail, which contained finely milled white powder. The targets included congressional offices. Fearing that they had been exposed, many Capitol Hill staff members were subsequently prescribed Cipro.

The "Amerithrax" attacks began just a week after 9/11. Biological and chemical weapons had been a modest part of the story I told in Undue Risk, mainly in [End Page 67] terms of the U.S. human experiments policies during the 1950, which typically brought ABC warfare (atomic, biological, and chemical) under the same heading. As the anthrax episode was unfolding, I noticed that, like the radiation experiments, there was an opportunity to bring biosecurity within the ambit of bioethics. I charted a prospective table of contents for a collection of papers on bioethics and biological weapons. Unlike the radiation experiments that unfolded in the modern world, in this case the historical angle was especially salient, because of the origins of biological warfare in the ancient world and the fascination with them in classical sources from the Old Testament to the Greek tragedians. The anthology, published in 2004 under the title In the Wake of Terror, consolidated my reputation as a bioethicist who was interested in national security issues. But the truth is that the substance of this work was more historical than ethical, and I was coming to think of myself as at least as much an historian as a normative philosopher—which suited me just fine. The case material provided by the histories of these technologies and the policy responses to them was so rich in itself, and far more compelling for most audiences than ethical theory removed from the cases, that I found the pull toward history irresistible.

Neurons

With the bioethics and biosecurity project well under way, I figured I had pretty much exhausted the national security theme. Then in 2002 I was asked to be a member of the opening panel at what is generally considered to be the first neuroethics conference. "Mapping the Field" was sponsored by the Dana Foundation and held at the Presidio in San Francisco, parkland that is itself an historically significant former Army base (Moreno 2002). I can't give myself high marks for my contribution to that panel. The trouble was that I had given neuroscience hardly any thought, but as was becoming my pattern I defaulted to an historical case and its implications, in this instance the now-famous incident in which a 19th-century railroad worker named Phineas Gage suffered a spectacular injury when an iron bar passed through his head. He survived the destruction of much of his frontal lobe, though by at least some accounts not without a significant personality change. There was certainly plenty of philosophical material to chew on here, and chew I did, but it didn't make for much of a meal.

For the rest of the day I listened to speakers who taught me a lot about neuroscience, which considering where I started isn't saying much. But I was dissatisfied by my performance and had a vague feeling that I was missing something that I really could contribute, if only I could figure out what that was. In retrospect, it amuses me that we were meeting at a site that was so important to military history, yet I was too dense to make a connection to the topic. Finally, near the end of the day, during an open discussion period for the approximately 150 attendees, I commented that no one had mentioned the military angle. How could all this cool emerging neuroscience be relevant to national security and defense? I think it's fair to say that I got mostly blank stares except for one older British scientist (I don't remember who it was), who I think flashed me a wry smile.

That nonverbal communication, if that's what it was, was precious little encouragement, but it was enough to keep me thinking about the question. A few weeks later the editor of the Dana Foundation's journal Cerebrum asked me if I wanted to propose an article on neuroethics for the journal (Moreno 2004a). Since the San Francisco conference, I had been noodling over a title that I thought would be a lot of fun to use sometime: "DARPA on Your Mind." Of course, DARPA stands for the Defense Advanced Research Projects Agency, the Pentagon's cutting-edge science and technology outfit, most famous for developing the internet. What kinds of neurotechnologies might DARPA be interested in, and what ethical issues might arise? It took me no more than three days to write the article, partly because some of the technologies and policies in Undue Risk and In the Wake of Terror touched on the topic.

A few weeks after I had sent that item to the editor, she conveyed a message to me from the foundation's president, William Safire, the former speechwriter for Richard Nixon and Spiro Agnew and the weekly contributor of a New York Times Magazine column called "On Language." Could I write a book on the topic of neuroscience and national security? I said I'd be very interested, but I needed some time to make sure it could be done. After all, surely any material along those lines would be classified and unavailable to the public. So I resorted to what was in 2003 a novel way to answer such a question: I Googled "DARPA" and "neuroscience." I got hundreds of thousands of hits, most of them unhelpful but enough to give me confidence. One set of results involved published papers in journals like Nature and Science, and in magazines for lay readers like Scientific American, that described neuroscience experiments and mentioned the funding source. Another set of results included lists of DARPA contracts and requests for proposals. It didn't take much imagination to connect the dots. These results allowed me to infer questions that DARPA was actively interested in pursuing. I knew then I could write a book.

But a book about what? What attracted me to the topic was the paradox that it was both novel and not novel at all. The idea of using modern neuroscience to manipulate the mind/brain was compelling, but the idea that minds could be manipulated for aggressive purposes isn't new. The idea is pervasive both in military history and in popular culture, from Thucydides to brainwashing. So the challenge was to bring the two narrative lines together. After about a year of writing, I realized that I was taking on the roles of historian, tech geek, and science journalist, with the ethical issues largely implicit. When the time came to write a subtitle for Mind Wars, Safire vetoed including the word ethics. When you write a book about the brain and the military, he said, people will understand it involves ethics. Of course he was right.

During the first year of writing, I kept the project pretty much to myself. There was so much low-hanging fruit in the open literature I was afraid someone would beat me to it. When the first edition of Mind Wars was published in 2006, my next worry was that it would be dismissed as evidence of paranoia, but I found that it was easier to protect myself from such charges as an historian than as an ethicist. In the latter role I would mostly have proposed and described principles, such as self-determination and privacy and their application to the imposition of new neurotechnologies, leaving others to speculate about whether I thought the new technologies present a substantial threat. But as an historian I could explain how the science, the ethics, and the security challenges have all developed to bring us to this point. An historical perspective also enabled me to make some observations that were more provocative than ethical judgments per se, the sort of meaty analyses common to historians of science. For example, I could note that national security authorities' interest in various neurotechnologies is subject to what might be called fashions that roughly track the surrounding culture: in the 1950s and 1960s, it was psychoactive drugs; in the 1970s and 1980s, there was a minor but well-documented interest in extrasensory phenomena like telekinesis; and today neuroimaging is a matter of substantial investment by national security science agencies like DARPA. In the history of neuroscience there are also examples of waxing and waning interest, like animal experiments involving electrical pulses to the brain in the late 1950s and early 1960s, an approach to human mind modification that reemerged in the 1980s with a new technology called transcranial magnetic stimulation.

Because I have spent so much time thinking about science and national security, I have also run across the hot-button issue of the proper role of scientists in military research and development. I ended Mind Wars by expressing a common progressive view about the role of civilian science in the military, that new neuroscientific knowledge in the service of organized violence should also inform opportunities for peaceful conflict resolution. Recent scholarship about the role of 1960s social science research on Communist insurgencies in places like Vietnam has made me less sanguine about this conclusion. Social scientists who worked at defense-oriented think tanks believed that rather than militarizing social science, their work could "civilianize" the military by making its approach to counterinsurgency rely more on psychology and sociology than force of arms. Not only did those efforts fail, but partly as a result of campus anti-war protests the Department of Defense withdrew from academia and established its own in-house social science expertise (Rohde 2013). One of the drawbacks of persistent historical study is that counterexamples keep cropping up, but it also tends to make us smarter.

What Am I?

I return to the point that Isaiah Berlin seems to have had in mind more generally about the contribution of history to philosophical discussion, that it can enlighten and enrich our understanding of ideas that can too easily be ripped out of the context that gives them meaning. That fairly modest claim is often confused with a stronger one, that the history "explains" the ethical standard we have now, or the still stronger claim that the history "justifies" the ethical standard we have now. Positivist philosophers of science used to distinguish between the context of discovery (how one comes up with an idea) and the context of justification (how one marshals evidence and arguments on its behalf). Although this point cuts against my grain as a naturalist, it is worth remembering. Nonetheless, those philosophers also allowed that there can be distinctions without differences, so normative explanation and justification can co-occur in the same moral analysis. That's how an historically informed bioethical analysis can work, without fear of self-contradiction.

From the standpoint of professional categories, writing as an historian who is usually identified as a bioethicist has its hazards. The culture of bioethics generally sets an expectation for normative arguments in a philosophical or even legal mode. Historians make arguments, but they tend to be more elliptical and rarely normative. Sometimes I find myself drawing normative conclusions (such as the warrant for certain annoying requirements for the design of clinical trials), based on an account intended to illuminate the events and reasoning that got us here. Because this approach puts all historical arguments about current circumstances at risk of committing the genetic fallacy—the notion that understanding the origins of an idea is tantamount to its refutation or justification—it's necessary to make the case that the story is relevant in spite of the fact that at least some important conditions have changed. For example, the new "basket studies" of anti-cancer drugs that target specific genetic mutations in tumors rather than types of cancer may engender ethical issues that are not illuminated by much of the history of informed consent for oncology experiments, due to the radically novel trial design.

On the other hand, one of the most powerful normative tools the historian-ethicist brings to the table is sensitivity to the value of historically grounded judgments and some experience in reconstructing prior conditions concerning particular cases. What is sometimes called retrospective moral judgment can be a stumbling block or, if carefully executed, a slam dunk. In my view, that was the case when the Presidential Commission for the Study of Bioethical Issues reviewed the sexually transmissible disease experiments in Guatemala in 1946–1948, events that were only discovered after President Obama took office. At the invitation of Guatemalan officials, American government scientists were given the opportunity to determine the efficacy of penicillin in a challenge study. The then still-new antibiotic was administered after intentional exposure to a sexually transmissible disease through intercourse with infected sex workers, direct application to genitalia, and even inoculation into the cerebrospinal fluid to cause neurosyphilis. Apart from the apparent lack of consent, was such a study ethical in 1946? There are obvious perils in applying modern standards to such a sensitive case. This was the same problem faced by the radiation experiments advisory committee, and the solution was quite similar: based on contemporary statements, both public and private, both commissions concluded that those responsible for the experiments knew that they did not pass ethical muster at the time. One particularly damning item was a New York Times article in which such experiments on humans were said to be "ethically impossible," a view attributed to one of the scientists involved in the Guatemala study. That phrase became the title of the Presidential Commission's report (2011).

In 1954, Joseph Fletcher published his landmark Morals and Medicine, often celebrated as the first recognizably modern work of bioethics. Fletcher argued on behalf of positions that would be regarded by many as radical even today, such as active euthanasia, and some that would be regarded as quite regressive, like involuntary sterilization. As a naturalist and an historian, I'd say that before we can assess how Fletcher's work can inform our own, we need to make sense of him as a person of his time. The field should have learned in the past 60 years that it's time to add "history" to "morals and medicine." At the risk of asserting a counterfactual, I think Fletcher would agree.

Jonathan D. Moreno
Perelman School of Medicine, University of Pennsylvania, Blockley Hall, 14th Floor, Philadelphia, PA 19104.
morenojd@mail.med.upenn.edu

References

Benjamin, M. 1990. Splitting the Difference: Compromise and Integrity in Ethics and Politics. Lawrence: University Press of Kansas.
Buchanan, A. E., and D. W. Brock. 1989. Deciding for Others: The Ethics of Surrogate Decision Making. Cambridge: Cambridge University Press.
Faden, R., and T. L. Beauchamp. 1986. A History and Theory of Informed Consent. New York: Oxford University Press.
Fletcher, J. 1954. Morals and Medicine: The Moral Problems of the Patient's Right to Know the Truth, Contraception, Artificial Insemination, Sterilization, Euthanasia. Princeton: Princeton University Press.
Lilla, M. 2013. Foreword to I. Berlin, Against the Current: Essays in the History of Ideas. Princeton: Princeton University Press.
Moreno, J. D. 1995. Deciding Together: Bioethics and Moral Consensus. New York: Oxford University Press.
Moreno, J. D. 1999. Undue Risk: Secret State Experiments on Humans. New York: W. H. Freeman.
Moreno, J. D. 2002. "Gaging Ethics." In Neuroethics: Mapping the Field, ed. S. J. Marcus. Washington, DC: Dana Press.
Moreno, J. D. 2004a. "DARPA on Your Mind." Cerebrum 6 (3): 91–99.
Moreno, J. D. 2004b. In the Wake of Terror: Medicine and Morality in a Time of Crisis. Cambridge: MIT Press.
Moreno, J. D. 2012. Mind Wars: Brain Science and the Military in the 21st Century. Rev. ed. New York: Bellevue Literary Press.
Presidential Commission for the Study of Bioethical Issues. 2011. "Ethically Impossible": STD Research in Guatemala from 1946 to 1948. Washington, DC: Presidential Commission for the Study of Bioethical Issues. http://bioethics.gov/sites/default/files/Ethically-Impossible_PCSBI.pdf.
Rohde, J. 2013. Armed with Expertise: The Militarization of American Social Research During the Cold War. Ithaca: Cornell University Press.
Shalit, R. 1997. "When We Were Philosopher-Kings." New Republic, April 28.
Shirk, E. 1965. The Ethical Dimension. New York: Appleton-Century-Crofts.
