My Time in Medicine
Through this autobiographical reflection on a life in medicine and bioethics, the author discovers that time is a unifying theme in his work. From his early writing on the regulation of house staff work hours and his abandonment of essentialism and the development of clinical pragmatism as a method of moral problem-solving to his scholarship on end-of-life care and disorders of consciousness, time has been a central heuristic in an effort to bridge ethical theory and clinical practice.
Autobiographical essays can be an indulgence. Often self-congratulatory and low on self-reflection, they seldom serve a purpose other than to stoke nostalgia. So when given this opportunity to write about my life in medicine and bioethics, I decided I would take stock, and not simply celebrate whatever accomplishments I might have had. Rather, I would use this opportunity to look for themes that linked the decades together. My hope was that the process might assemble the mosaic that has been my life into a discernible pattern that could only be seen from a distance, and from the vantage of historical reflection. Maybe, if I was lucky, past would be prologue, and I would learn something that might help me script the next few chapters in my story.
I must confess that I was surprised by what I have come up with, and hence my predictable, but intentionally deceptive, title. This essay is not about my life in medicine, but rather about how time, as a heuristic, has informed and organized my clinical work as a doctor and my more theoretical scholarship as a bioethicist. And like all things temporal, this realization has only become apparent in retrospect. As Kierkegaard (1843) wrote:
It is perfectly true, as philosophers say, that life must be understood backwards. But they forget the other proposition, that it must be lived forwards. And if one thinks over that proposition it becomes more and more evident that life can never really be understood in time simply because at no particular moment can I find the necessary resting-place from which to understand it—backwards. (89)
I wrote what I thought was my first reflection on time and medicine for a talk I gave in Salerno back in 2010, when I was writing my book Rights Come to Mind: Brain Injury, Ethics and the Struggle for Consciousness (2015). I was reflecting on the case of Terry Wallis, who had emerged from the minimally conscious state. Having been thought vegetative for nearly two decades, he was now able to talk and communicate reliably. The problem was that he was stuck in time. A veritable Rip Van Winkle, he remained in 1984, the year of his injury. Time had stopped for him, even as it had moved on for the rest of us (Fins 2009).
Initially, this temporal lapse was a curiosity and presented practical challenges. Wallis thought his daughter, who resembled his ex-wife, was his wife. When he saw then-President George W. Bush giving a State of the Union address, he turned to his mother and asked, "What happened to Reagan?" (Fins 2015, 167). Wallis was living in what Augustine might have called "an eternal present," but ironically it was 1984. I was taken by Wallis's situation and the question of personal identity. Could one know who one was if one did not have a temporal sense of one's age or one's place in time? Yes, Wallis was Wallis, but he was temporally out of place (Fins 2015).
Bill Winslade described a similar case in his pioneering volume, Confronting Traumatic Brain Injury: Devastation, Hope and Healing (1998). The patient he profiled was "neither man, nor boy" (78–79). It struck me: could we truly have personal identity absent temporal contextualization? Heidegger (1927)—and mind you, I am not a big fan generally—did observe that Dasein exists in history and could not be ahistorical. And it turned out that such contextualization and placement in time had significant practical implications for my emerging work addressing the neuroethics of disorders of consciousness.
But consider one salient example. The authors of the 1994 Multi-Society Task Force on the Vegetative State made the important distinction between the persistent and permanent vegetative states. They noted that the persistent vegetative state was a diagnosis, and that the permanent vegetative state was a prognosis. Again, the time constant was embedded in this important nosology, anticipating—to some extent—the seminal work of Nicholas Christakis (2000) on prognosis.
The place of time in the narratives of the patients with disorders of consciousness was all very rich. So much so, I devoted a chapter in Rights Come to Mind to "Minds, Monuments and Moments," thinking about time, self, and brain injury. It was one of my favorite parts of the book, a labor of love, and as I was thinking about this essay, I kept coming back to that chapter. I came to realize that it was a microcosm of the book, which itself used the story of one young woman's brain injury for its narrative structure.
Rights Come to Mind begins with the brain injury that Margaret (Maggie) Worthen sustained as a senior at Smith College in 2006 and continues for nearly a decade. Maggie's story, and those of 50 other families who came to Weill Cornell and Rockefeller for in-depth IRB-approved scientific study, have informed my work. I am grateful to Maggie's mother, Nancy, and all the other families who worked with us for their permission to tell their stories using narrative methods in bioethics. Maggie's fascinating story continues to teach us much about how the brain recovers from injury and its capacity to heal itself over time (Fins and Schiff 2016). Most remarkably, my colleagues have just published a paper in Science Translational Medicine that used functional neuroimaging to demonstrate longitudinal rewiring of Broca's area through concerted efforts to help Maggie reestablish functional communication using her ability to move her left eye (Thengone et al. 2016).
Maggie's history was a complicated one, and as I was writing Rights Come to Mind, I realized that the best approach was a historical one, focused on chronology. To that end, I made out timelines on yellow legal paper and tacked them on a corkboard over my desk to keep me organized. When I got stuck or frozen in the narrative, when something did not make sense, it turns out I had often gotten ahead of the story and violated a timeline. The timelines were rather simple: I used Maggie's story as the main story line and would digress by telling another's family experience or delve into the neuroscience to explain a new development or technology.
If my book had been a musical score, something also governed by time and tempo and notated in rhythms defined by time, it would have been a four-part string quartet at best. This contrasted with the multi-layered orchestral timelines that Robert Caro used to write his multi-volume biography of Lyndon Johnson (Caro 1990–2012). Caro's methods were profiled in the New York Times Magazine as I was writing my book (McGrath 2012). His timelines adorned his Spartan office and were also tacked on a corkboard to provide direction and guidance. I read Caro's 2012 profile with great interest, as I had read all four volumes of the Johnson biography when I was drafting Rights Come to Mind. I had thought that I needed a muse and hoped that Caro's fine writing might improve my own efforts. I was also attracted to his historical approach to narrative and thought it might be adapted to my own purposes, albeit on a smaller scale. In retrospect, I now more fully appreciate that my book, too, was a history, a history of a brain injury. And like a historian, my thinking was governed by time and chronology.
In retrospect, as I go through the work I have done over the years, my time in medicine has been all about time. My first paper in bioethics, and my first public talk, was about the New York State Bell Commission Reforms, which governed how many hours house staff could work. The reforms were prompted by the death of Libby Zion at my hospital, then the New York Hospital, in an era when house officers worked 36-hour shifts. It was alleged that her death was the result of long work hours by unsupervised trainees. I worried how professional obligation and responsibility would be affected by a "shift" mentality, and that—absent the articulation of norms and responsibilities—medical professionalism would be governed by the clock and not by patient needs. My concern about this issue was partly because time had placed me on the cusp of the past and future. As an intern, I had trained under the ancien régime (even that phrase has the ring of time), taking call every third night and routinely working 120 hours a week. By the time I was a senior resident, new rules had been put in place that limited us to 80 hours of work each week.
In theory, the reforms were long overdue, but in practice they were hastily implemented, without data about fatigue or its educational impact on trainees whose shift would "time out" before patients evolved over the first critical 36 hours after admission. Failing to see that unfold clinically robbed student doctors of the chance to see the evolution of the natural history—there's that word again—from diagnosis to treatment. How else could one hone one's diagnostic sensibility? Getting it secondhand the next morning was like reading a discharge summary, accurate but without the immediacy of decision-making that leads to learning and professional growth.
But more important to the nascent medical ethicist in me was the question of time and moral responsibility. By instituting guidelines for house staff hours without first instituting an ethical framework that would ensure that doctors in training would take care of patients even when their time was up, medicine was engaging in an important professional lapse. This realization came to me vividly when I was a senior resident on the cardiac telemetry unit, where we cared for patients with heart disease who were not sick enough to go to the Cardiac Care Unit and for those who were coming in for cardiac catheterization.
One morning I was paged to the Emergency Room to see a patient. As I was going down to the ER, a patient arrived on the telemetry unit to be admitted for cardiac catheterization. He had an unstable blockage in one of his coronary arteries. Patients like him came to us on intravenous nitroglycerin, which prevented them from having a heart attack before our eyes, so when they arrived on the unit after a bumpy ambulance ride from the boroughs, we always checked their IVs to be sure they weren't kinked. It was a rookie mistake not to ensure the IV was running, so we did this routinely when patients arrived.
As I was running down to the ER, I asked my intern to check on the new patient's IV. It was after 11 am and he had already "signed out," but I had to see the emergent case downstairs. I can still see the scene vividly. In the middle of the unit, across from the nurse's station and with the patient on the gurney, tended by the paramedics, my intern extends his left wrist and points to his watch, telling me it was after 11. I don't recall what I said, but I told him I didn't care, and that he should go check the patient.
When I went down to the ER, I reflected upon what my intern had just told me. What had been wrought by the Libby Zion case, her mourning father (Sidney Zion) who was a crusading journalist, and a New York State Commission led by Bertrand Bell, a physician who did shift work as an ER doctor? Bell's was a political appointment, and he seemed to have been the wrong person to formulate a strategy for the longitudinal care needs of patients and how to train young house officers. And I told him so.
One of my first exposures to media was a live debate with Bell on Cable NBC (now CNBC). We did it again at the Hermann Biggs Society, an elite New York City public health association, which generally didn't admit newly minted fellows. I was a curiosity and a paradox. Temporally, I straddled two eras, and I was arguing against less work for my generation of doctors because I thought that regulating house staff hours, at least as envisioned, was eroding our ethics.
I subsequently wrote about this in a piece entitled, "How Many Hours?" (1990), my first piece published in the Hastings Center Report, and I spoke about it at a program convened by the New York Academy of Medicine under the aegis of David Axelrod, the legendary Commissioner of Health who prompted the Bell Commission Reforms (Fins 1991). I argued that if we were going to change the sociology of postgraduate medical education, we could not simply limit work hours. We also needed to ensure that patients received unimpeded and longitudinal care and that we developed a model of care that made this possible.
Long before more modern notions of the collaborative "medical home," I suggested that pairs of interns should be jointly responsible for a cohort of patients. In contrast to the chaotic coverage model that had on-call interns covering patients they neither knew nor prioritized, I recommended that two interns have joint "ownership" of each other's patients. In this way, when they were covering for their mate, they were also caring for patients whom they knew and who knew them, much like a group practice. While this was the model eventually adopted a decade later, as these reforms went national, the details matter less than the importance of sustaining physicianly obligation in the face of structural changes to models of care.
It wasn't that I was against change. Rather, I felt that there were certain things that needed to be preserved in caring for patients. Doctors had an obligation to those entrusted to them, and I felt that these responsibilities should be enduring across shifts and generations. Once we started eviscerating this obligation, I feared we would erode professionalism and transform medicine.
Medicine was too hard without the moral compass offered by a professional ethos. Trainees and young doctors had to believe that the profession was bigger than themselves, as this kept them pointing true north. Around this time, I discovered Edmund Pellegrino's (1979) writing on medical morality and what has been described as essentialism, those enduring, unchanging aspects of the doctor-patient relationship that neither evolve with time nor morph when there is a change in how care is delivered.
I really liked Ed Pellegrino and enjoyed the occasions when we met. I admired the elegance and indeed the simplicity of his message. He was, for many doctors, an ego ideal of the sort of doctor we all aspire to be. And as my own professional formation was taking place, much of what he wrote became a touchstone for how I wanted to comport myself in my emergent professional life. His world represented a kind of moral certainty and rectitude that is appealing as one's own trajectory is swiftly making the transformation from the laity to a venerable priestly tradition. I think that this is why he, among all the first-generation bioethicists, is the sentimental favorite of working docs, if they care a whit about our field.
I thought about Pellegrino recently, after his death, when I eulogized him before an annual meeting of the American Society for Bioethics and Humanities. The opportunity to reflect upon his life rekindled the feelings I had for him and the influence of his work on my life. In honoring him, I compared him to notables like Percival and Osler, who themselves made contributions to ethics in medicine (Fins 2014). Linking him to these historically important figures, although a well-intentioned honorific, was also an implicit acknowledgment that his views belonged to the past. And as appealing as they are, and were to me during my own professional evolution, they could only take you so far.
It was at this time I began to realize, if not fully articulate, that one's development in medical ethics did have a time constant, and that it evolved over time. It was a stage that I think doctors particularly go through as they get into medical ethics. First they find in Pellegrino-like arguments an affirmation of their own heightened ethical sensibility. They are comforted by the stability of enduring views coming from such a sage and revered commentator. And many stop there in their evolution, which is fine. But I could not abide by the conservatism of that stance, Pellegrino's politics aside, and I started to gravitate to a more dynamic approach to medicine and bioethics.
It was during the early 1990s, as I was finishing my fellowship in general internal medicine, starting my career at Cornell, and working part-time as the associate for medicine at the Hastings Center, that I was struggling to reconcile my deep belief in my profession (what you might call the Pellegrino strand in my moral makeup) with the nagging sense that his essentialism was too reified and static to accommodate evolving notions of practice in medicine (Miller and Brody 1995, 2001). I became fascinated with the emergent field of clinical ethics consultation and care at the end of life.
As a newly minted assistant professor of medicine, I had just been appointed chair of the New York Hospital Ethics Committee by David Skinner, our new CEO. Skinner was the world-class thoracic surgeon who was chair of the department of surgery at the University of Chicago when Chuck Bosk wrote his epic Forgive and Remember (1979). Skinner was also a good friend of Mark Siegler, who I later learned had vouched for me when Skinner was deciding upon my appointment (Fins and Gracia Guillén 2016).
In any case, I was starting up our ethics committee and teaching myself how to do ethics consults by the seat of my pants, commuting up to the Hastings Center a couple of days a week and whenever my call schedule allowed. I went up as often as I could, especially when they convened the amazing meetings for which the Center was justly known. It was an incredible opportunity for a novice bioethicist to meet most of the founders of the field and to connect their words to their faces and personalities. I think this has been an amazing advantage, as I do not think it is always so simple to separate the man or woman from his or her work.
Besides working with Dan Callahan and Will Gaylin from the Center, to whom I will be eternally grateful for the opportunity to join the staff as an associate right out of my medical residency, I also had the chance to meet other notables. Those initial meetings made an indelible impression and influenced my work over the ensuing decades: Bill May on covenant; Al Jonsen on the relationship of ethical theory and practical judgment when he presented his classic talk on balloons and bicycles, which would later be published in the Hastings Center Report (Jonsen 1991); and Jim Childress, who was unfailingly kind and took a moment to walk over to a rather star-struck novitiate and introduce himself. Equally important was the chance to collaborate with wonderful colleagues like Strachan Donnelley, Bruce Jennings, Susan Wolf, Jamie Nelson and Hilde Lindemann Nelson, Phil Boyle, Erik Parens, Kathy Nolan, Bette Crigger, and of course Marna Howarth, who was the librarian, den mother, and our collective confessor. My birth of bioethics began at the Center, and all these wonderful friends and colleagues helped shape my thinking in ways that still engender deep appreciation. It was the next best thing to graduate school, and probably a whole lot more fun.
I simply loved the place and the big table in the library in the Center's second home in Garrison. Everyone had depth in their field and yet was able to talk across disciplines in a meaningful way. Today we might call it cross-training, but then it was a truly wonderful experiment in interdisciplinarity. And it was familiar to me and suited me well. Although I had gone on to medical school and specialized training, my roots were in the liberal arts and an interdisciplinary major in history, literature, and philosophy in Wesleyan University's College of Letters (COL). When asked by Paul Schwaber—one of my professors at Wesleyan, a lay analyst, and Joyce scholar who wrote a wonderful volume on Ulysses (Schwaber 1999)—what the Hastings Center was like, I said "It was like COL for adults." It was a magical place, and on every drive back to Manhattan, I would have three or four ideas rattling around in my head for a new paper or project.
And yet all was not sanguine. As welcome as I felt at the Center, as the clinician, I was still an outsider. Back home at the hospital, I was also increasingly seen as different from my peers. When I was at the medical center I was the philosopher king, and when I was at Hastings I was simply a doc. In truth I was neither—but I was trying to bridge these two worlds without clear role models or a path to pursue. It all seems quaint in retrospect, now that we have clinical ethicists and even emerging qualifying criteria for their practice (Fins and Kodish et al. 2016; Kodish and Fins et al. 2013). But back then it was a struggle to have two inchoate identities that seemed to have the prospect of a perfect fit, if somehow all the pieces could ever come together.
It was an uncertain time in my life, with many powerful mentors expressing skepticism about what I was doing when I had a seemingly promising career in medicine. One professor of mine, with whom I was teaching house staff at the time, pointedly asked me what exactly I planned to do as a medical ethicist. He thought I should be a hematologist-oncologist as a way to satisfy my emerging interest in end-of-life care. It was a conventional response to my interests, but one which I felt would constrain my creativity and do little to challenge the status quo of how patients were cared for at the end of life. So much of what was troubling about end-of-life care emanated from how oncologists treated patients with solid tumors. Joining their ranks would have meant being coopted and silenced by the pressures that exist within groups.
To be effective, I needed to use my perch in medicine as a means of critique and to be an insider-outsider—not part of a hierarchical group that would silence its younger members, but not so far away from the field as to be clueless about current practices and their clinical and ethical inconsistencies. That was the plan, but it was achieved with a burden that has lingered. As a physician-ethicist, I would always be a bit orthogonal to my profession and never fully accepted by colleagues, even as I sought to help them find and restore whatever internal morality existed in the work that we shared. And conversely, as a physician, I would never be fully accepted by the good folks in the humanities who had their own codes and cultures.
Yes, the life that I had chosen—or that perhaps had chosen me—was that of a perennial outsider. Today, we would put a positive spin on it and call me an interdisciplinarian. But back then, I was an academic refugee, always looking to find a home, or if need be, create one.
I captured this dynamic in a tongue-in-cheek essay for the Hastings Center Report subtitled "Practicing on the Saw Mill River Parkway" (Fins 1995). Before we had cellphones, much less smartphones, I traveled up and back to Hastings with a beeper that would invariably go off as I was in the Bronx. With nowhere to stop, and with Bonfire of the Vanities fresh in my memory (Wolfe 1987), I would wait to answer my page until I was across the border in Westchester. My destination: a lone payphone in a gravel parking lot just off the Saw Mill Parkway, halfway between Manhattan and Briarcliff Manor. That phone booth became the metaphor for my intermediate position between Cornell and Hastings and for that space between practice and theory.
The phone booth was more than a metaphor. It also suggested that I needed to connect the two realms of my life in a fashion that made sense, that was both suitably contemplative and actively productive. I enjoyed medicine, and I also enjoyed reflecting on practice. I knew it made me a better doctor and teacher, yet this sort of thinking was not always valued by my colleagues. It was soft and subjective and—if they had taken a philosophy of science course—was deemed not falsifiable, so not valuable. Similarly, my colleagues in the humanities were sometimes so divorced from the harsh and often contradictory realities of everyday life in the hospital that they conjured up frameworks that were at best illusory. I once heard a colleague at the Center say that we shouldn't let facts get in the way of a good theory. And while the comment was made tongue in cheek with an eye towards writing in a strong and convincing manner, without being bogged down by extraneous minutiae, there was a different worldview at play, prizing theory over context.
I was living a dichotomous existence and needed some sort of heuristic to bring these worlds into harmony. Theory and practice were better together. How could it be otherwise? And yet, I felt like an exile when with the theorists or practitioners. Not to simplify matters, but either I was in the wrong field, or they just did not understand.
I needed to find a way to better integrate theory and practice, and at the same time make bioethics relevant to shifting currents in clinical practice, such as the emerging end-of-life care discussions that were a central issue in bioethics in the 1990s (Fins 1999a). It was a discordant experience, and I began to deeply appreciate that ethical principles only took you so far. Essentialism and the application of principles worked when there was stasis, when there was not a conflict between one principle and another. But of course clinical ethics was dynamic, and principles came into conflict all the time. It was not enough to assert a list of principles—the more critical question was what to do when principles were in conflict, or when the prior construction of principles distorted the narrative or marginalized important details that could be dispositive to normative reasoning.
I remember being struck by the "Specifying and Balancing Principles" section in the fourth edition of the venerable Principles of Biomedical Ethics by Beauchamp and Childress, just published in 1994, and thinking that this notion would be central to my work. I was grateful for their nod to pragmatism and the issue of method, when they admitted that "Our pragmatic goal should be a method of resolution that often helps, not a method that will invariably resolve our problems" (32). I liked their modesty and, more importantly, their call for a method. And yet, I did not think that their top-down, deontological a priori focus on principles was the method that those of us in the clinic needed. At least that was the way principlism had been interpreted and applied in practice, often at odds with the more sophisticated and evolved approach that Beauchamp and Childress were articulating by the fourth edition of their text.
Yet despite this evolution, more needed to be said about the relationship of principles and practice. Of course, principles were important, but how do you cultivate the narrative to know which ones applied? And how could you possibly know that there were only four principles to choose from, notwithstanding all the arguments about the common morality?
Instead, it seemed to me that their deductive reasoning needed an inductive corrective: a bottom-up approach, in which the details lead to higher-level ethical reflection. The transition from context to contemplation was very important, as a focus on details without a move to a normative meta-analysis rings hollow and becomes thoughtless practice. The process was analogous to diagnostic thinking in medicine, where the clinical history, examination, and laboratory data lead to an organizing diagnosis or, even better, a differential diagnosis of plausible theories that might organize seemingly random signs and symptoms to explain a patient's illness. A similarly inductive way of thinking would seem to work for clinical ethics.
Moving from principlism to a more inductive approach would have the added advantage of being a familiar heuristic for medical students and my fellow clinicians. It would, I thought, resonate with a method that borrowed from how doctors normally think. And in this way, it would share a degree of sophistication with diagnostic thinking and clinical practice that was lacking, at least in the prevailing understanding and use of principlism.
I vividly remember meeting with Jack Barchas, Cornell's chair of psychiatry, as I was forming the hospital ethics committee. I was on a "listening tour" to build support for my initiative and understand the views of key stakeholders, and Barchas gave me sage advice that continues to inform how I do clinical ethics. I don't recall his exact words, but he told me not to dumb it down. He cautioned against conducting clinical discussions of ethics on a lower level than the more sophisticated conversations that might occur over a nuanced diagnosis, or the rather fierce discussions that happen at morning report when residents have to defend choices about which antibiotics they chose for a patient they admitted the night before in front of their peers and attendings. No, he advised me, for them to take you seriously, they need to be challenged by your approach and methods. You can't avoid the mess and complexity, ignore details, or seem to paint a Panglossian picture.
As I was going through this evolution and drifting towards a more pragmatic orientation that would blend theory and practice in a dynamic fashion, I was very fortunate to have great colleagues and intellectual reinforcements. Right on time, to continue the importance of chronology in my life narrative, arrived Matthew Bacchetta, a just-graduated master's student from the University of Virginia's religious studies and bioethics program. He had been deeply influenced by John Fletcher and Franklin G. Miller at UVA and came to work with me the summer before he started medical school at Cornell. Matt became the vector that connected me to Frank and John and their inspiring work pioneering pragmatic methods in bioethics.
Miller, Bacchetta, and I began to develop clinical pragmatism as a method of moral problem-solving, building upon John Dewey's theory of inquiry (Fins et al. 1997). The approach started with an evidentiary base, cultivated through a disciplined method of contextual analysis, and worked its way towards a hypothetical resolution of the problematic situation that might involve an appeal to theory. It was grounded in precedent when that was governing, but the method could also accommodate novelty and evolution (Miller, Fins, and Bacchetta 1996).
Discovering Dewey, and being mentored by Miller—as so many others have been—was a transformative time for me. And through our articulation of clinical pragmatism, I began to develop the moral vocabulary I needed to engage in normative thinking in the thick of medical practice. It gave voice to what I had long thought.
Clinical pragmatism also tapped into long-held intuitions. A couple of years ago, when my father moved out of my childhood home, I brought some books back to New York. I glanced at my college copy of Aristotle's Nicomachean Ethics and was startled to see marginalia which anticipated so much of what would later evolve into clinical pragmatism. Someone once told me that the world was divided between the Platonists and the Aristotelians, and I had always been much more of an Aristotelian. Notwithstanding the practice of radiology, medicine's nosology always seemed to me to be a lot more like botany than looking at shadows in a cave. In discovering American Pragmatism, and refracting it for medicine, we had perhaps helped realize Aristotle's phronesis in a new guise.
These interests found their fullest expression as I became involved in the end-of-life care debates of the mid- to late 1990s. While others were consumed with debates over physician-assisted suicide, I was focused on a right to care and on helping patients make the transition from curative medicine to palliative care (Fins 2006). I thought the ambivalence of real patients and families was more worthy of my attention than the pronouncements of partisans pursuing an ideological agenda.
Questions of assisted suicide aside, simply dying was easier said than done. Despite all the bravado about transforming end-of-life care, the reality on the ground was still a challenge. As a clinical ethicist, I was helping patients and families deal with ambivalence and its handmaiden, conflict, at life's end. Generally, they were not concerned about advancing the right to die, but rather grappling with human frailty and the heartbreak of loss.
Doing consults in the hospital, and trying to bridge clinical ethics and palliative care, I saw a need to help patients and families make transitions and engage in constructive goal-setting at life's end (Fins 2006), to avoid corrosive futility disputes (Fins and Solomon 2001), and to partake in advance care planning in a manner that was more relational than contractual (Fins 1999; Fins, Maltby, Friedmann et al. 2005). Central to all these efforts was an attempt to accommodate circumstances prompted by the inevitable passage of time. No one expressed this better than Paul Cowan, author of the aptly titled An Orphan in History (1982), who poignantly observed that "the world is composed of the sick and the not-yet-sick" when he was first diagnosed with the leukemia that would take his life (Berger 1988).
For Cowan, the transition of past and present was compressed. The passage from wellness to grave illness was rapid. He died within a year of his diagnosis. Yet he took advantage of reflecting backwards. He secured that resting-place that, as Kierkegaard reminds us, is so essential to understanding backwards. Amidst the chaos of hospitalization and growing infirmity, he was able to opine wisely about the sick role that would rob him of middle age. His words are a fine writer's legacy that stand the test of time.
Most of us never take the opportunity to secure the resting-places that provide a refuge along life's journey, that, like the hospice of the ancient pilgrimages of the sick, provide shelter from the onslaught of life. And so I am grateful for the scholarly refuge this essay has provided to understand backwards. It is a privilege most of us do not have.
These opportunities, when they are recognized, come with a responsibility to learn and grow. We are obliged to use these reflections to live forwards, which, as Kierkegaard instructs, is what we are compelled to do. We have no other choice: Time makes anything else impossible. And so what have I learned perched at this way station, this resting-place, from this exercise in self-reflection? Taking the measure of my days, I am now convinced that the experience of the sick and their healers, our categorization of disease and nosology, the structure of medical education, and the tempo of a hospital's daily life all warrant a resting-place for temporal reflection.
Time will tell if I am right. But for now, the arc of my life speaks to the need to better describe how time informs and structures the practice and experience of medicine. So as I live forward and understand backwards, I look to another departure, and more time in medicine and bioethics.
The author is grateful for the editorial comments of Amy B. Ehrlich and Samantha F. Knowlton, and to Franklin G. Miller for two decades of friendship and collegiality.