A confession is in order. Like almost everyone else of a certain persuasion, I recoiled when Sarah Palin invoked the notion of a "death panel" to characterize reform efforts to improve end-of-life counseling. That was wrong and unfair. But I was left uneasy by her phrase. Had I not been one of a handful of bioethicists over the years who had pushed to bring the need for rationing of health care to public attention and proposed ways to carry it out? And was not a common thread running through the latter efforts the likely necessity of some kind of committee or other public mechanism to make the hard decisions? Were we not, in other words, talking about a "death panel," even if none of us has been so imprudent as to use such a phrase? And did we not regularly bemoan the fact that politicians, left and right, would not go near the word "rationing"?
My answer to all those questions is yes, but with some important distinctions. One of them bears on the theoretical efforts to make a case for rationing and to propose means to carry it out. Another is the gap between that effort and the political realities of bringing rationing theory before the public eye. Still another is whether it is possible to envision an ethical theory that takes politics fully into account. But there is first a larger background story to be told about all that.
The larger story appropriately begins with the 1960 event that has often been thought of as the birth of bioethics. In that year, the University of Washington nephrologist Belding Scribner devised a shunt that would allow those suffering from kidney failure to be hooked up to a dialysis machine that could keep them alive for many years. But there were few of those machines and many more candidates for their use than could be accommodated. Rationing decisions of the most wrenching kind had to be made.1
The solution was a procedural one: the formation of two committees, one to determine the medical criteria for selecting candidates, the other—an Admissions and Policy Committee—to choose, as the prominent journalist Shana Alexander wrote, "who shall live and who shall die." For four years that committee, whose membership was anonymous, made case-by-case decisions, and its general criterion was a troubling concept, that of the "social worth" of the patients. The committee had a dreadful time making such choices, and the very idea of such a committee was widely assaulted.
Dr. Scribner said later that "we had been naive" not to realize that what seemed to be the "reasonable and simple solution of . . . letting a committee of responsible members of the community choose patients" would evoke "a very serious storm of criticism."2 Among those in ethics who entered the fray were James Childress and Paul Ramsey, who contended that a random lottery solution would be more fair, and the philosopher Nicholas Rescher, who favored a utilitarian solution that tacitly seemed to accept the "social worth" standard.
The dialysis controversy finally came to an end in 1972, when Congress passed a bill providing Medicare coverage for dialysis. Money, in short, was the way out of the moral dilemmas of committee decisions. But why, many commentators asked, did Congress not do the same for other lethal conditions, such as cancer and heart disease? That question was answered with silence. Consistency is not one of the behavioral traits of Congress.
So far as I know, no similar effort to have committees make life and death decisions has ever been mounted in this country. Nonetheless, among those in bioethics who have written much on rationing over the years—Norman Daniels, Leonard Fleck, Paul Menzel, Allen Buchanan, Peter Ubel, and myself, for instance—there is a fair degree of consensus. I would sum it up as follows: if not at once, then sooner or later, rationing will be necessary (the steady rise of cost inflation will necessitate it); bedside rationing will not be acceptable (too open to bias and erratic criteria); rationing will have to be done at...