A Tale of Two Disciplines: Law and Bioethics
Fascination with In re Quinlan, the first high-profile right-to-die case in the United States, led the author to law school. By the time she received her law degree, bioethics was emerging as a field of study, and law and bioethics became her field. The mission of legal education is to teach students to "think like a lawyer," which can be a productive way to approach issues in many fields, including bioethics. Legal education can also teach individuals to respect people whose views on bioethics issues differ from their own. This essay describes three areas in which legal training influenced the author's work in bioethics: treatment decisions, research misconduct, and stem cell research.
Karen Quinlan played a big part in my decision to become a lawyer. When this nation's first high-profile right-to-die case was litigated in the 1970s, I was a college graduate who wasn't sure what to do next. I had majored in psychology and sociology and had thought about graduate study in one of those fields. But In re Quinlan (355 A.2d 647 (N.J.1976)) pointed me in a different direction.
Karen Quinlan was a young woman who had temporarily stopped breathing after taking tranquilizers and drinking alcohol at a party. Oxygen deprivation left her in a persistent vegetative state, and doctors believed she needed a respirator and other intensive care measures to survive. Over time, her family came to see these measures as invasive and futile. But when they asked for the respirator to be removed, the doctors, hospital, and New Jersey prosecutor opposed the request.
Constantly in the headlines as it moved through the New Jersey trial and appellate courts, Quinlan was impossible to ignore. I was just one of thousands following the story. Quinlan and her family became familiar figures, and people across the country knew about the circumstances that had put her in the intensive care unit. The case was a vivid public announcement of the unprecedented control modern medicine could exercise over life and death.
The case drew my attention for another reason. I wasn't conscious of it at the time, but personal experience laid the groundwork for my fascination. My father died of cancer when he was 39. I was 12 at the time; my brothers were even younger. Heeding the conventional wisdom of that era, my mother tried to shield us from the fact that he was dying. We knew something awful was happening, though, and eventually, it did. Yet even after his death, no one talked to us about the nature of his illness and why he couldn't be successfully treated. It was years before I learned that it was melanoma that had ended his life.
By the 1970s, I was old enough to investigate the terrifying world of life-threatening illness myself. Quinlan was the start of my education. Besides reading everything I could about the case, I attended a public event where a panel of experts discussed Quinlan. The panel included a couple of law professors who talked about topics like the legal definition of death and the relevance of religious beliefs to medical and legal decision-making.
I was intrigued by their remarks. I was also surprised that they were talking about topics like these. I hadn't realized that law could be so interesting. Before this, I had never thought about becoming a lawyer. Because of Quinlan, I learned about the diversity of the law. I learned that law was relevant not only to business matters I regarded as dry and dull, but also to matters I cared about.
I decided to apply to law school, and in 1976 I began my legal education. At that time, law schools didn't offer many courses in law and medicine. Fortunately for me, Harvard Law School's faculty included Alan Stone, a psychiatrist who taught courses related to his field. He was kind and supportive of my interest in law and medicine, becoming a mentor of sorts. I loved the classes he taught and took as many as I could.
Stone's courses supplied welcome relief from required courses like contracts and civil procedure. Yet being exposed to the conventional legal curriculum wasn't all bad. I developed interests in criminal law, as well as constitutional and administrative law. Even courses in "boring" areas like property and bankruptcy presented policy issues that captured my attention. I wouldn't have said it then, but I now believe that required courses in higher education have a role. In my years of teaching, I've watched many law students develop professional interests they would not have discovered in the absence of those courses.
What I found most valuable was the analytic approach I learned in law school. Legal education succeeds when graduates leave with certain skills. Good lawyers know how to make the best case possible for their position. Doing so requires them to acknowledge and address both weaknesses in their own claims and strengths on the other side. Good lawyers also know how to subdivide issues into more manageable conceptual components. They can point to broader principles underlying competing positions on an issue and find points of agreement among those positions. They can see how different potential resolutions would fit into a broader conceptual framework. They can foresee the practical implications of resolving issues in different ways. And they can provide persuasive legal and scholarly support for the positions they represent.
Law students learn these skills through reading and discussing cases, statutes, and regulations. The material can seem tedious, but much of it is actually rich and complex. The traditional approach to legal education, the Socratic method, is designed to help students see this for themselves. Being called on by professors is a rite of passage for law students. It's nerve-wracking, but also effective. To avoid embarrassment, students try to anticipate the questions they will be asked, learning to analyze what they read in the ways that good lawyers do.
Most of the judicial and policy controversies that are the focus of legal study are not easy to resolve. For example, appellate courts typically face situations in which each party has some justification for its position. Although the judges ultimately choose which party should prevail, they explain how they reach their result. Good majority opinions describe the strengths and weaknesses of different positions. Concurring and dissenting opinions offer different slants on a case. Students see that there can be more than one reasonable position on a matter.
The hope is that through reading and discussing high-quality appellate opinions, law students will become good critical thinkers. Although good critical thinkers are able to decide which position on an issue is most persuasive, they are also able to see the merits of other positions. The process also shows law students that people they like and respect will be defending opposing legal and policy viewpoints. This helps students become more comfortable in adversarial situations, including public and scholarly debates over contentious issues.
Law school teaches students to care about words, too. Attention to detail is crucial for readers of legal material. Preparing for class requires methodical and thoughtful reading. Texts must be dissected and interpreted. Terms can have different meanings, as can statements made in different contexts. Students are expected to discern the possible meanings and implications of the material they read. Law students learn about the importance of careful writing, as well. By the time I entered law school, old-style legalese was on its way out. Students were encouraged to write in plain English, minimizing the technical terms and flowery language found in much of the earlier case law.
These lessons were reinforced in two of my early professional positions. I was lucky to spend two years as a law clerk for U.S. District Court Judge James E. Doyle, a federal judge who was a beautiful writer and was devoted to teaching his law clerks to become better writers. I still remember some of his advice, such as "Don't tell readers that something is interesting, write about it so that they reach this conclusion themselves." After clerking, I spent a year as a Bigelow Fellow, teaching legal research and writing to first-year law students at the University of Chicago. As every teacher knows, the best way to learn something is to teach it. Evaluating student papers helped me see how to improve my own writing.
In the early years of my career, I also spent time as a National Institute of Mental Health (NIMH) postdoctoral fellow at the University of Wisconsin. The fellowship was an avenue into the world of medical ethics. I sat in on medical ethics classes and became a member of the medical school's institutional review board (IRB). I had time for research and writing, time I used to complete my first articles for law and medical ethics journals.
Bioethics was emerging as a field of study in the 1980s, and law and bioethics became my field. In 1983, I began teaching at Baylor College of Medicine. Five years later, I moved to a joint law-medicine position at Case Western Reserve University. After 10 years at Case Western, I accepted a joint law-medicine appointment at Washington University.
It's become a cliché, but the mission of legal education is to teach students to "think like a lawyer." Thinking like a lawyer can be a productive way to approach issues in many fields besides law. I know that this approach has heavily influenced my work in bioethics. In the remainder of this essay, I describe three areas in which legal training contributed to my bioethics work.
Treatment Decisions: Past and Present
As I said earlier, my interest in end-of-life issues led me to law school. I've maintained this interest for four decades now, and I've spent a lot of time thinking and writing about two specific topics: advance medical decision-making, and decision-making for incompetent patients.
Decision-Making Over Time
When I was a postdoc at the University of Wisconsin, I was invited to participate in a reading group on philosophy and psychiatry. One of our readings was a proposal from three group members that was intended to address a problem in mental health treatment. Psychiatrists Joel Howell and Ronald Diamond, joined by philosopher Dan Wikler (1982), had developed a new approach to authorizing coercive psychiatric care that they called the "voluntary commitment contract." Through making such a contract, people with bipolar disorder and other episodic mental illnesses could authorize confinement and treatment at a later time when the interventions were unwanted. The claim was that contracts would offer much-needed help to individuals whose prior illness episodes had done great damage to their personal and professional well-being.
Howell, Diamond, and Wikler argued that the law should allow people to protect themselves and their loved ones from the effects of reckless behavior like excessive credit-card spending or failing to show up for work. Through making commitment contracts with their psychiatrists, people who had previously engaged in such behavior could guard against a recurrence. Such individuals could not be involuntarily treated under existing civil commitment laws, because their behavior failed to present a serious danger to themselves or others. Moreover, because they remained legally competent during much of their illness episodes, guardianship was unavailable as a vehicle for compelled treatment.
I was skeptical of the proposal. I had learned in law school that courts rarely ordered specific performance of personal service contracts, in part because of the deprivation of physical freedom that enforcement would impose. In light of this precedent, I thought courts would reject the argument that commitment contracts should be legally enforceable. In law school I had also learned about the reasons why courts and legislatures had adopted relatively demanding standards for civil commitment and guardianship. Before such standards were adopted, many people had been unjustifiably deprived of their liberty based on claims that they were mentally unsound. It seemed to me that the voluntary commitment contract was an attempt to bypass legal reforms that were meant to correct a previous injustice.
I began writing a response to the commitment contract proposal; that response eventually became my first law review publication (Dresser 1982). In making the case for the voluntary commitment contract, Howell, Diamond, and Wikler argued that in the context of episodic mental illness, a future-oriented choice should override a later conflicting choice by the same person. Although commitment contracts would authorize involuntary treatment, they contended that the contracts would actually promote the autonomy of persons burdened by mental illness.
To address this claim, I needed to consider individual autonomy over time. The voluntary commitment contract was a form of self-paternalism. People entering into a contract would impose on themselves a future liberty deprivation to advance what they saw as their overall best interests. Through reading philosophical analyses of self-paternalism and self-binding, I developed an ongoing interest in future-oriented decision-making.
The work I did for the article on commitment contracts shaped some of my later thinking about living wills and advance treatment directives. As I describe in the next section, some of the issues I raised about commitment contracts applied to advance directives as well. It seemed to me that in each case, changes in an individual's circumstances could support limiting the force of an earlier treatment choice.
Treatment Decisions for Incompetent Patients
In 1983, I became a professor at Baylor College of Medicine. For the first time, I was teaching law and bioethics. New professors soon realize that teaching a topic requires knowing it inside and out. To teach the course, I needed an in-depth understanding of end-of-life law. As I prepared for classes, I read the cases more thoroughly than ever before. I learned more about the patients whose treatment was at issue. I thought hard about the judges' reasoning and the stories they told about the patients. I began to see weaknesses in the conventional legal and ethical approach to decision-making on behalf of patients who had lost the ability to make their own treatment choices.
According to the conventional approach, decisions on life-sustaining treatment for incompetent patients should be guided by two standards. Family and other surrogate decision-makers, as well as clinicians caring for the patient, should begin by applying what is known as the subjective standard. The goal of this standard is to make the choice that patients themselves would make if they were competent. Applying this standard requires examining patients' prior statements and behavior indicating their beliefs and attitudes about end-of-life care. If evidence on these matters is unavailable or unclear, then surrogate decision-makers and clinicians should apply the benefit-burden standard, choosing the most beneficial and least burdensome treatment option for the patient before them.
In-depth engagement with the case law led me to see problems with this approach. Court opinions applying the subjective standard included detailed descriptions of the evidence on patients' prior preferences and values. In most cases, that evidence was vague, ambiguous, and incomplete. A Massachusetts case, In re Spring (399 N.E.2d 493 (Mass. App. Ct. 1979)), is one example. In that case, dementia patient Earl Spring's family wanted the court to authorize a halt to the dialysis that was keeping him alive. They said that because he had been independent and loved outdoor activities like hunting and fishing, he would wish to stop the intrusive dialysis treatments. Judges deciding the case accepted the view that Spring would want his dialysis stopped. Yet other courts considering evidence of behavior like Spring's rejected the notion that such general behavior could establish a particular treatment preference (In re Conroy, 486 A.2d 1209 (N.J. 1985)). The cases as a whole illuminated a number of problems with relying on patients' past statements and actions to resolve questions on life-sustaining treatment.
The cases also exposed problems with the benefit-burden standard. For example, New York judges deciding In re Storar (420 N.E.2d 64 (N.Y. 1981)) presented conflicting views of the burdens and benefits of treatment for an adult patient with severe mental disabilities. John Storar had bladder cancer and was receiving blood transfusions that doctors believed would extend and improve his life. His mother wanted them stopped because she thought they were too hard on him.
Because Storar had been incompetent his entire life, the court applied the benefit-burden standard to resolve the case. But the judges didn't agree on what would be best for this patient. The majority opinion concluded that although the transfusions caused Storar some distress, they also helped him feel well enough to engage in activities that gave him pleasure and enjoyment. A dissenting opinion presented a very different picture, however, one that supported Storar's mother's claim that the transfusions had severely compromised her son's quality of life. The case showed that the benefit-burden standard could lead to disparate outcomes, depending on who was evaluating the patient's situation.
Through studying cases like these, I developed my own ideas about decision-making for incompetent patients. I thought that both standards were too susceptible to manipulation, partly because decision makers weren't doing a good job evaluating patients' interests. So I published critiques of the conventional approach and suggestions for improvement (Dresser 1986, 2003).
I believed that the problems were most worrisome in cases involving conscious incompetent patients, as many scholars, including highly respected theorists like Ronald Dworkin, failed to appreciate the capacities and interests of people affected by dementia (Dresser 1995). I started to focus on that group, a group that is rapidly expanding as more people live long enough to develop Alzheimer's disease and other age-related dementias. I've written a lot about end-of-life issues in dementia care, and continue to work on this topic (Dresser 2016; Dresser and Whitehead 1994). What got me started on this was close reading of the cases. Legal training had given me the skills and substantive background to engage with the case law, and judges had provided the in-depth analysis that set me on this path.
Research Misconduct

In the late 1980s, I moved to Case Western Reserve University and began teaching law students for the first time. Law professors are expected to help with the basic law curriculum, and the school needed another criminal law teacher. I agreed to fill that slot.
First-year law courses are demanding for everyone. Students are anxious and unschooled in the conventions of legal analysis. Professors are often anxious, too. It takes time to learn how to formulate questions that will guide students to correct answers. Engaging in Socratic dialogue before a large group of students can be stressful for students and professors alike.
Socratic teaching also requires professors to develop a deep and expansive understanding of the subject matter, for students' questions are all over the map. Professors preparing for class try to anticipate the questions they could face and figure out how to answer them. This is challenging, to say the least. New law professors often spend 10 hours or more preparing for a one-hour class.
Not surprisingly, teaching criminal law was taxing. But it was also an intellectual adventure. The field's philosophical underpinnings are complex and absorbing. And I soon saw that some of them were relevant to bioethics topics. One of those topics was research misconduct.
At Case Western Reserve, I taught in both the law and medical schools. One of my medical school assignments was to help teach a science ethics course for PhD students. The course covered scientific misconduct, including data falsification, data fabrication, and plagiarism. I also served on committees reviewing cases of alleged scientific misconduct at the institution. Through participating in these activities, I became familiar with the federal regulations governing scientific misconduct, as well as controversies surrounding the regulations. It soon dawned on me that criminal law could help resolve some of those controversies.
During the late 1980s, government officials began developing an oversight system to address misconduct by scientists receiving federal funding for their research. In 1989, the U.S. Public Health Service issued regulations implementing the system. Many in the scientific community responded negatively to the regulations. One point of contention was the regulatory definition of research misconduct. As I studied the regulations and the criticism, I noticed a flaw in the definition that hadn't received much attention: the misconduct definition failed to specify the culpable mental state required for a misconduct finding.
Culpability is a fundamental element of criminal law. A defendant's mental state can determine whether the individual is innocent, or guilty and deserving of punishment. Mental state can also determine a person's offense level, and thus the severity of punishment that will be imposed. For example, someone who causes the death of another person can be guilty of murder, manslaughter, or negligent homicide, depending on the defendant's state of mind when the lethal act occurred. I thought that mental states were an essential element of research misconduct, too.
The government's initial misconduct provisions, as well as alternatives proposed by advisory groups, failed to include precise mental state requirements in their definitions of prohibited conduct. As a result, the provisions failed to spell out exactly what behavior was covered by the rules. A particular problem was the lack of clarity about negligent acts. For example, it wasn't clear whether the federal rules prohibiting plagiarism applied to individuals who carelessly but unknowingly used another person's words or ideas without proper attribution.
Immersion in criminal law helped me to see a second deficiency in the initial federal misconduct rules. Officials had failed to establish the standard of proof that applied to misconduct proceedings. The law offered three options: (1) the proof beyond a reasonable doubt standard that applies in criminal proceedings; (2) the clear and convincing evidence standard that applies in certain civil proceedings involving deprivations of liberty, such as civil commitment; and (3) the preponderance of the evidence standard that governs most civil cases. The omission of a standard of proof meant that research institutions and federal agencies were applying different proof requirements to the cases they evaluated.
Legal principles were also relevant to a third component of the research misconduct system: the choice of penalties and other remedial measures following a misconduct finding. Criminal law principles governing punishment offered guidance to institutional and federal officials responding to misconduct. Understanding the roles of retribution, specific and general deterrence, incapacitation, and rehabilitation in punishment for blameworthy behavior would help officials tailor their responses to specific instances of misconduct, thus promoting fairness and effectiveness in the oversight system.
I wrote a couple of articles describing how legal concepts could be useful to those establishing and overseeing research misconduct proceedings (Dresser 1993a, 1993b). If I hadn't taught criminal law, I would not have been able to write those articles. Eventually, federal officials issued revised regulations that included mental state requirements, as well as a specific standard of proof to be applied in misconduct proceedings (Office of Science and Technology Policy 2000).
The President's Council on Bioethics
Legal training affected my experience as a member of the President's Council on Bioethics, as well. President George W. Bush, a polarizing figure, created the Council when he took office in 2001. The Council's first chairman was Leon Kass, whose bioethics scholarship was both respected and a target of frequent criticism. The Council was a politically diverse group, with more conservative members than other national bioethics commissions had had.
These features alone were enough to make the Council controversial. On top of that, the first item on the Council's agenda was embryonic stem cell research. President Bush had asked the Council to consider the most contentious bioethics issue of the day.
I don't think any of us were prepared for the intense scrutiny the Council received. The Council had a much higher public profile than past national bioethics commissions had had. Journalists from national media outlets attended our meetings and perused our documents. They were joined by advocates from scientific organizations and patient groups. There were many news stories and editorials on our work, and C-SPAN covered several of our reports and proceedings.
Much of the commentary on the Council was negative. To many people, the Council represented an administration that was politically suspect. Some bioethicists were angry that the Council had so many conservative members. And although previous national bioethics commissions had had many members from outside the field, some doubted that a group with so many outsiders could make meaningful contributions to bioethics debates.
The negative commentary continued after the Council issued its first report, which considered human cloning (President's Council on Bioethics 2002). The report's recommendation to prohibit cloning to have children was relatively uncontested, because virtually no one thought the procedure was safe enough to try in human beings. But the Council's recommendations on cloning to create human embryonic stem cells for research had a different reception.
In 2002, when the Council was preparing its cloning report, scientific organizations and patient groups were arguing that research cloning was justified by the significant health benefits it could generate. Bioethicists overwhelmingly took that position, too. But because the research would require the creation and destruction of human embryos, right-to-life groups strongly opposed it. Research proponents dismissed the right-to-life opposition, contending that any concerns research cloning raised were clearly outweighed by the moral imperative to pursue a laboratory discovery that could help patients.
Council members were divided on the research cloning question, with strong opinions on both sides. My own view was that the research could be justified under certain circumstances. I didn't think that early embryos should be assigned the same moral status as fully developed human beings. If there were strong scientific and health justifications for research cloning, I thought it should be permitted. At the same time, I could understand the cloning opponents' position. Human development is a process, and a case can be made for assigning full protection to the early embryo. It's not where I would draw the line, but I have to concede that it's a defensible moral position. I also thought there were weaknesses in the claim that cloning for research was morally required to help patients. There was a decent possibility that the knowledge scientists were hoping to gain through research cloning could be obtained through studying stem cells derived through other methods.
In short, I saw both positions as acceptable and hoped we could find a compromise that each side could live with. After much discussion and deliberation, the Council did come up with something of a compromise, although it wasn't widely recognized as such. In the cloning report, the Council made two different recommendations on research cloning. Ten members favored a proposal for "a four-year national moratorium (a temporary ban) on human cloning-for-biomedical research" (President's Council on Bioethics 2002, 231). Seven members endorsed a proposal to establish "a system of oversight and regulation that would permit cloning-for-biomedical-research to proceed promptly, but only under carefully prescribed limits" (246).
Because of my legal background, I didn't think the proposals were all that different. Establishing an oversight and regulatory system would be time-consuming, given the procedural requirements of federal administrative law and the likely debate over the system's appropriate features. The process would probably take years, perhaps even longer than the four years the moratorium would involve. So I concluded that each measure would have similar practical effects. Both would create time for further debate over the proper rules to apply to human cloning for research, and both would create time for laboratory work on the search for alternative stem cell sources to continue.
Not surprisingly, many people were displeased by the Council's recommendations. It seemed callous to delay research that could deliver immense benefit to patients. Research cloning had been portrayed as miracle science that would quickly produce therapies for everything from spinal cord injury to Alzheimer's disease. Indeed, its popular name at the time was "therapeutic cloning," an exceedingly misleading term for laboratory work that was nowhere close to human application.
Neither of the Council's recommendations became policy, which is the most common fate of national bioethics commission recommendations. But a policy adopting the more restrictive recommendation of a four-year moratorium wouldn't have made much difference in the long run. In the years after the Council issued its report, induced pluripotent stem cells emerged as a satisfactory substitute for human embryonic stem cells in many research contexts (Papapetrou 2016). Although Congress barred federal agencies from funding any research that involved embryo destruction, scientists with funding from other sources continued to work on deriving stem cells from cloned embryos. It was 2013 before any of them were successful (Science 2013).
Moreover, it will be years before we know whether human embryonic stem cells or induced pluripotent stem cells lead to safe and effective human therapies. Patients who anticipated the speedy treatments that research proponents promised were misled and disappointed by those unrealistic predictions. It was a mistake to buy into the hype about stem cell research, a mistake that should encourage bioethicists to speak out when interest groups and the media engage in similar hype about other basic science discoveries.
I was sorry to see how certain members of the bioethics community responded to the Council's work. Although many offered solid, well-founded, and productive criticism of Council reports, I was shocked and embarrassed by the hostile reactions of some bioethics colleagues. Some bioethicists attacked the Council in ways that were petty and unprofessional (McGee 2003). Some signed a letter to the President criticizing changes in Council membership, a letter that was based on a one-sided account rather than a thorough investigation of the facts (Manjoo 2004; Wade 2004). Some turned their backs when, at the invitation of program planners, Leon Kass spoke at the American Society for Bioethics and Humanities Annual Meeting. It was not the field's finest hour.
In the research cloning debate, I was unwilling to dismiss either side, partly because law school had taught me to see the strengths and weaknesses of competing positions. Moreover, because I knew something about administrative law, I could see that the Council's two research cloning recommendations would have similar effects. Legal training had also prepared me to work with individuals who didn't see the world the way I did.
Law school had taught me to be an advocate, but an advocate who respects people who disagree with me. It had taught me to investigate the facts relevant to a controversy, seeking information from a variety of sources. It was disturbing to learn that not everyone in bioethics subscribed to those practices. As we begin the era of a new President whose policies are bound to conflict with the views of many bioethicists, I hope that criticism of those policies will be robust, vigorous, evidence-based, and professional.
What Law Brings to Bioethics
I've described some ways that legal training affected my work in bioethics. Law school showed me the importance of careful reading and writing. It gave me a basic understanding of topics like contract law and criminal law, topics that are relevant to bioethics issues. It prepared me for working with individuals whose beliefs and values differ from mine.
Of course, this is one tale of law and bioethics, not the tale. Bioethics colleagues with legal training might make similar points, but I'm sure they would also have different thoughts on the matter. Law school affects individual students in different ways, and disciplinary training goes only so far in shaping an individual's academic and professional contributions. And we all know that legal training doesn't prevent people from becoming overly partisan and adversarial.
Yet there is no question that members of the legal profession belong in bioethics. Legal education gives individuals valuable skills, as well as substantive understanding of a system with huge effects on the resolution of bioethics questions. How people experience death and dying, reproduction, access to health care, and research participation is substantially determined by law. Bioethics needs lawyers to interpret and influence policies in these and other relevant areas. And law needs bioethicists to illuminate the moral dimensions of different policy options. I'm certain that members of both disciplines will continue to collaborate in the coming years, as science and medicine confront society with new ethical challenges.