
  • Benefits, Harms, and Motives in Clinical Research
  • Nancy M. P. King (bio)

In recent years, the Report has hosted a lively scholarly conversation about moral justification and informed decision making in clinical research, including exposition of the moral norms of research, reexaminations of voluntariness and paternalism in research decision-making, and comparisons of equipoise and nonexploitation. Lynn Jansen's article in this issue adds a thorough, thoughtful exploration of altruism in patient-subjects. She closely examines the nature and implications of altruistic motivation, primary and subsidiary altruism, and the relative strength of subsidiary motives. Recognizing that consistent, accurate assessment of research subjects' motivations poses a significant challenge to implementing a "motive test," Jansen nonetheless reasonably argues that identifying altruistic motivations can assist in assessing subjects' understanding, particularly by helping to determine whether patient-subjects are affected by the therapeutic misconception.

Two of Jansen's arguments are especially noteworthy. One is her important and well-articulated but admittedly unfinished claim that restricting studies in which subjects are permitted to participate to those with an acceptable harm/benefit balance (or risk/potential benefit balance, if you prefer) is not paternalism, but is legitimately grounded in researchers' obligations not to exploit subjects. This argument is critical to the relationship between design and ethics in clinical research, yet it is often reduced to protection versus autonomy for subjects. Though I would put it in somewhat different terms, I wholeheartedly endorse her central tenet: that the researcher has independent obligations, based on the moral norms of science, to offer participation only in trials with appropriate harm/benefit balances.

Of course, this raises further large questions: What is "appropriate," how is that determined, and how great a role should be played by nonscientific factors like the social and cultural context of research decision-making? There is little agreement about what counts as an acceptable harm/benefit balance in the first place, let alone about how that balance might be altered when a patient-subject says, with full understanding, "I'll take that risk"—whether the motivation is altruism or a realistically faint hope of direct benefit. This area of inquiry greatly merits further attention.

Interestingly, the argument I like best is directly linked to the one I like least: Jansen's "central claim" that altruism (if it could reliably be tested for) might justify participation in riskier research than would otherwise be acceptable. This troubles me for several reasons. First, even though the claim is only theoretical, the lack of shared understanding of what counts as an acceptable harm/benefit balance is a barrier to any meaningful discussion of exceptions. Second, what about the research team's motivations? Arguably, Jesse Gelsinger was motivated by genuine altruism: according to his father, he joined a clinical research study to help save babies who shared his genetic disorder. After he died, his father came to believe that the researchers' motivations were, in contrast, entirely self-interested. Whatever the truth, the question arises whether subjects' altruism should permit greater risk-taking when the researchers themselves are not similarly motivated.

This leads to yet more questions. Why are patient-subjects the only actors in the clinical research enterprise from whom altruism is expected? I suspect the therapeutic misconception could be minimized by paying patient-subjects, which would signal that researchers are making consensual use of them to provide data. As Jansen acknowledges, payment and altruism can reasonably coexist; whether payment influences understanding could be empirically tested. More importantly, why do we continually look to interrogating and controlling subjects' choices as a primary means of policing research? Why not have a more robust discussion of what should count as a fair offer to subjects, as a matter of sound science and professional integrity? Research has shown that the information provided about risks of harm and potential benefits in the consent form and process is frequently confusing at best. Why not improve disclosure first—which means coming to agreement about how to describe both benefits and harms—and thus indirectly improve both understanding and clarity of motivation?

Scrutinizing the roles and practices of researchers and IRBs often presents greater theoretical and practical challenges than assessing the decisions of research subjects. Jansen has contributed much to the conversation, yet...
