Credit and Priority in Scientific Discovery: A Scientist’s Perspective
ABSTRACT

Credit for scientific discovery plays a central role in the reward structure of science. As the “currency of the realm,” it powerfully influences the norms and institutional practices of the research ecosystem. Though most scientists enter the field for reasons other than desiring credit, once in the field they desire credit for their work. In addition to being a source of pleasure, credit and recognition are necessary for successful careers. The consensus among sociologists, philosophers, and economists is that pursuit of credit increases the efficiency of the scientific enterprise. Publishing results in a scholarly journal is the core approach to obtaining credit and priority, and the publishing landscape is undergoing dramatic change. As research groups get larger and more interdisciplinary, and scholarly journals proliferate, allocating credit has become more difficult. Awards and prizes further contribute to credit by recognizing prior attributions and articulating new credit attributions through their decisions. Patents can have a complex relationship to credit, and disputes over authorship and credit are common and difficult to adjudicate. Pathologic pursuit of credit adversely affects the scientific enterprise. Academic institutions assess credit in appointment and promotion decisions, and are best positioned to assume responsibility for addressing problems with the credit ecosystem. Several possible remedies are presented.

Overview

Public credit for scientific discovery plays a central role in the reward structure of science, and over many years, it has powerfully influenced the norms and institutional practices of the research ecosystem. While the benefits of credit to scientists who make the most important discoveries are obvious, credit is also important to the great majority whose discoveries are of lesser importance. The basic tenets of credit are embedded deep within the background of scientific norms and culture and are often taken for granted. But as academic scientists labor daily to advance knowledge and build and maintain research careers, they cannot avoid paying attention to credit, as it is indeed “the currency of the realm” (Dasgupta and David 1994). Absent credit, it is impossible to secure appointments, promotions, research funding, access to students, and other necessities of research. The salience of credit to the research community has garnered substantial attention from sociologists, historians and philosophers of science, and economists (Arrow 1972; Biagioli 1998; Dasgupta and David 1994; Merton 1957; Strevens 2003). Through distinct analytic lenses, scholars from these disciplines have examined the role of credit in the ecosystem of science.

Periodically, structural changes in the ecosystem of science bring the issue of credit to the fore, stimulating the scientific community to reflect on how it is assessed and, in turn, how it affects the research enterprise. One such moment was in the mid-19th century, when the emergence of science as a profession and the rise of the modern scientific journal together forged the close relationship that still exists between print publication and the attribution of credit and priority (Csiszar 2018). In recent years, the research and publishing ecosystems have undergone substantial change, the latter now supplemented with (or potentially replaced by) diverse online platforms for communicating and evaluating scientific claims. This might, therefore, be an opportune time to reexamine the norms and institutions of the current approach to credit, priority, and professional recognition, to assess how well they function, and to consider changes in response to new circumstances.

In addressing this issue, I bring experience as a biomedical researcher, research administrator, and awards committee member who has dealt with these issues on various organizational levels. After reviewing several foundational questions, I will examine two sides of the credit ecosystem: how credit for discovery is allocated, and how the culture of credit in turn affects the conduct of research. Though credit is pertinent to all areas of research, details vary between disciplines. In this paper I emphasize academic research in the biosciences, the field I know best. I endorse the view that credit for discovery is an important element of the research ecosystem, exerting net positive effects on discovery. However, I see the current approaches to allocating credit as flawed. It is therefore important to develop an integrated picture of the credit landscape. Ideally, this will be useful to participants in the research ecosystem, and to those wishing to propose and guide future changes.

Is the Desire for Credit a Major Motivation for Scientific Careers?

The goal of scientific research is to discover new knowledge about the world; bioscience research seeks to illuminate the function of living organisms, including our own species. Why do people choose careers in research, and in particular bioscience research? One important element is a person’s belief that he or she possesses the requisite aptitude, as determined both by inherent capacities and by exposure to educational and other experiences that bring these aptitudes to the fore. Beyond aptitude, three overlapping factors together account for most decisions to pursue a bioscience research career. The first is curiosity about living organisms, which can serve as a powerful driver; curiosity-driven quests for biological understanding can be intensely gratifying and motivating. A second motivation is more goal-oriented: a desire to make discoveries capable of producing practical benefits, such as enhancing human health. This motivates many bioscience researchers, and for some it assumes a dominant role. In translating this motivation to specific career choices, some see basic science as having the greatest potential impact. Others pursue more “translational” approaches, seeking impact over shorter timelines. A third motivation has both personal and instrumental elements: research as a career producing professional and financial rewards. These motivations co-exist in varying proportions, their balance changing across career stages in response to evolving life experiences, incentives, and needs.

Beyond aptitude and these three motivations, does a desire for credit and recognition motivate the choice of a research career? It seems unlikely that a desire for credit and recognition is a major primary motivation for pursuing a research career, since recognition and credit can be achieved in many unrelated careers. But once they have entered a research career, the vast majority of scientists find that credit assumes major salience. Whether initially motivated by curiosity, a desire to change the world, career advancement/financial rewards, or all three, the great majority of scientists desire credit and peer recognition for their accomplishments (and priority where appropriate). First, as mentioned above, credit and recognition are essential for obtaining funding and other necessary resources. A second reason is the prevalent normative view that people should be fairly recognized for their accomplishments. Peer recognition is indeed a source of satisfaction to scientists. As stated by Darwin, “My love of natural science . . . has been much aided by the ambition to be esteemed by my fellow naturalists” (cited in Merton 1957). A small number of scientists may truly be constitutionally indifferent to credit; others, somewhat more numerous, may deny its importance until the moment they see it unjustly denied.

Scientists Have Long Desired Credit and Recognition

Is there excessive concern with credit today, compared to an earlier time when we may have imagined scientists pursuing research solely through dispassionate and disinterested concern for the truth? Bioscience research today differs in many respects from that conducted during centuries past. Among these differences are a dramatic increase in the numbers of scientists and their professionalization; the relationship of scientists to the institutions employing them; the increased cost of research and intensified competition for funds; the complexity of the publishing ecosystem; the increased necessity for research to be interdisciplinary and collaborative; and the potential rewards, financial and otherwise, available to successful researchers. It would be surprising if these profound changes didn’t affect the norms and values of the profession, including those related to credit.

But as reflected in the quote above from Darwin, the desire of scientists for credit and priority is hardly new to the culture of science. Robert Merton, the father of the “sociology of science,” addressed this in classic articles in 1957 and 1969. His 1969 paper began by comparing the terse and understated 1953 Letter to Nature in which Watson and Crick reported the discovery of DNA structure to Watson’s book The Double Helix, published in 1968 (Watson 2012; Watson and Crick 1953). The latter account described how the intense curiosity-driven search for the structure of DNA was accompanied by an equally intense race for credit and priority, details absent from the technical account. As The Double Helix revealed, the pursuit of the discovery was not solely a dispassionate search for the truth. A prime indicator was the treatment by Watson and Crick of Rosalind Franklin, whose crystallographic data, shown to them without her knowledge, were critical to development of their model, though her contribution was not initially acknowledged (Maddox 2002; Selya 2003). Early reviews of The Double Helix expressed concern that the book did a disservice to science by knocking modern scientists off their pedestals, suggesting a moral deficiency compared to illustrious forebears. However, Merton (1969) put this narrative decisively to rest. By reviewing discoveries of many scientific giants from the 17th through the early 20th centuries, he documented that concern for priority and credit was no less intense for these luminaries of the past, despite working in utterly different cultural environments. The stories of Galileo, Newton, Leibniz, Lister, Cavendish, Faraday, and Freud, among others, led Merton to conclude that a desire for credit, recognition, and priority was not a consequence of the modern scientific environment, but rather has been common to scientists across time (though, as today, they often denied these desires). To quote Merton, “these controversies, far from being a rare exception in science, have long been frequent, harsh and ugly” (Merton 1969).

Credit’s Role in Academic versus Industrial Scientific Environments

The role of credit is most relevant to research conducted in academic settings, such as universities and associated schools, hospitals, and institutes, where appointments, promotions, funding, and status require individual credit and external recognition for research accomplishments. The centrality of credit is quite different, however, for scientists who conduct research in the biopharmaceutical industry. These PhD and MD scientists, who underwent similar training and were exposed to the same culture of credit as those pursuing academic careers, are more numerous than those in the academy. Today, fewer than 20% of newly minted life science PhDs take academic positions (Offord 2017).

Whereas successful academic research careers require publishing and credit, research within industry is judged by different criteria and delivers distinct rewards. This results from the competitive advantage to companies of maintaining research confidentiality and trade secrets, and contrasts fundamentally with the norms of academic research, which emphasize public disclosure. Of course, some industry research is eventually published, and corporate cultures vary in this regard. One major example is the Bell Laboratories, a research organization associated with AT&T that has produced exceptional innovation and eight scientists who garnered Nobel Prizes. In general, industry sees publications as a secondary goal, subsidiary to the primary goal of advancing therapeutic (or diagnostic) products in a competitive marketplace. External recognition of individual scientific contributors is of lesser value to the company and may conflict with company goals. Promotion and advancement of company-based researchers do not require external recognition of their research. Instead, companies reward research contributions through promotions, salary, bonuses, or internal recognition events (Fisk 2006).

Since scientists as a group desire credit and peer recognition, why do some choose a path offering diminished opportunity for these (Roach and Sauermann 2010)? Some scientists prefer the research environment of a company, such as the emphasis on teamwork and more reliable provision of research resources. They may prefer practical therapeutic goals, as opposed to research more tenuously linked to practical public benefit. Some seek the greater compensation typically offered by industry positions. Others believe they might not succeed in highly competitive academic environments, or turn to industry after initial experiences in academia failed to meet their expectations. The reliance on “soft money” and the nearly continuous need to apply for and obtain short-cycle grants at a time of low grant paylines can be quite dispiriting. On the other hand, some academics leave for industry in the course of highly successful and rewarding academic careers, when recruited for roles they find attractive. Not surprisingly, these motivations exist in various combinations among scientists pursuing research in industry. Industry scientists accept careers lacking (or having much diminished) external individual credit for research discoveries, and with much diminished freedom to choose the subject of their research. These are acceptable trade-offs for those aspects of industrial research they find attractive (Sauermann and Roach 2014). In contrast, many academic scientists trade off potential advantages of industry research for freedom to pursue questions about which they are most curious, and the satisfaction derived from peer recognition. Scientists may move from academia to industry at any point in a career, from directly out of training to much later in a mature academic career. While moves occur in both directions, those from industry to the academy are less common.

The purpose of the foregoing section has been to clarify that the present inquiry into credit in research applies primarily to the academy, where most fundamental basic science discoveries arise. While much important research is conducted within industry, that research is less relevant to the present consideration of individual scientific credit.

Multiple Inputs Influence Allocation of Credit and Recognition

Although scientists desire credit (especially those in the academy, as discussed above), the factors that determine its allocation are complex, differing somewhat between fields and subfields. The foundational determinant, however, is universal: credit first requires that a discovery be publicly communicated. This may begin with scientific presentations, but eventually involves authorship on publications in scientific journals. Publications are the initial mechanism for allocating credit and recognition, though not the only one. Others include awards and prizes, inventorship on patents (for the minority of discoveries where these exist), and diverse manifestations of academic recognition. Allocation of credit may also be affected by the institutional, geographic, or national origins of the scientists, reflecting local knowledge or various biases. The major factors contributing to the allocation of credit will be addressed in turn below.

Presentations and Publications

While scientists can make a discovery and never communicate it, from the perspective of science as a social enterprise that grows our base of knowledge, discoveries must be communicated, enabling others to evaluate claims and to reproduce and extend them. In earlier periods, when journals were far fewer in number and importance, oral presentations could accomplish this (Csiszar 2018). Today, presentations at scientific meetings still play a useful role in communicating research, but publication in academic journals (most often peer-reviewed) is required for definitive credit; dates of submission and publication are key determinants of priority, should that be an issue. Publications are therefore essential to establishing scientific reputations and launching and maintaining scientific careers.

Authorship and Allocation of Credit

Key factors in assessing scientific reputations are the number and perceived quality of publications and judgments of their originality and impact, but these assessments are often not straightforward. The assignment of authorship on publications (who is included, in what order the names appear, how the roles and extent of contributions are specified; in essence, what authorship actually implies) is centrally important to allocations of credit (Harvard University and The Wellcome Trust 2012; ICMJE 2018; Rennie, Yank, and Emanuel 1997). This is simple for a transformative discovery reported in a widely read paper with a single author that is quickly confirmed and extended by others. In such cases, one simply cites the paper as the evidential basis for the allocation of credit, with the author as the deserving recipient. Unfortunately, that is far from the typical model today, as the number of contributing authors on a typical paper continues to increase (Wuchty, Jones, and Uzzi 2007).

Perhaps surprisingly, we lack broadly accepted conventions to specify the meaning of authorship with regard to the type and extent of contributions by each author. Existing conventions differ tremendously between scholarly fields. Thus, it is often impossible to properly assign credit based on current authorship practices. This has led some to propose that the term author be abandoned for scientific (as opposed to literary) publications, replaced by the term contributor (Harvard University and The Wellcome Trust 2012; Rennie, Yank, and Emanuel 1997). While this proposal has some appeal, the concept of author is deeply embedded in scientific culture, and efforts to dislodge it would be challenging.

It is widely agreed that all authors should have made “substantial, direct, intellectual contributions to the work” (ICMJE 2017). To provide greater clarity and transparency, many journals now require that each author’s contributions be specified within publications, with sign-off by all parties (Campbell 1990; ICMJE 2017). Contributions may include generating the idea, conducting the work, writing and editing the paper, conducting various analyses, and so on. However, there is no quantitation of specific contributions, a major limitation. Another new authorship practice is “starred” first or last authors, which allows recognition of equally important (if distinct) roles by two or more participants, whether from the same lab, or from different collaborating labs. Though “starred” authorship designations prevent some disputes over who should be chosen as first and last authors, it is not uncommon to see jockeying for “true” first and last authorship placements, since starred status is not currently captured by bibliometric assessments.
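To illustrate the kind of structured contribution statement described above, the sketch below records per-author roles in a machine-readable form loosely modeled on the CRediT contributor-roles taxonomy. The field names, role labels, and example authors are illustrative assumptions, not any journal’s required schema.

```python
# Illustrative sketch (not a journal's actual schema): capturing per-author
# contributions and "starred" or corresponding-author status in structured form.
# Role labels loosely follow the CRediT contributor-roles taxonomy.

from dataclasses import dataclass, field

@dataclass
class Contributor:
    name: str
    roles: list = field(default_factory=list)  # e.g., "conceptualization", "formal analysis"
    equal_first: bool = False                   # "starred" co-first authorship
    corresponding: bool = False

paper_contributors = [
    Contributor("A. Researcher", ["conceptualization", "investigation",
                                  "writing - original draft"], equal_first=True),
    Contributor("B. Collaborator", ["investigation", "formal analysis"], equal_first=True),
    Contributor("C. LabHead", ["supervision", "funding acquisition",
                               "writing - review & editing"], corresponding=True),
]

# Print a contribution statement of the kind journals increasingly request.
for c in paper_contributors:
    flags = []
    if c.equal_first:
        flags.append("co-first")
    if c.corresponding:
        flags.append("corresponding")
    suffix = f" [{'; '.join(flags)}]" if flags else ""
    print(f"{c.name}: {', '.join(c.roles)}{suffix}")
```

A record of this kind makes contributions explicit, but as noted above it still does not quantify the relative weight of each contribution.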

While increased clarity regarding each author’s contributions would be extremely useful, efforts to achieve this remain variable and spotty. Conflicts occur, resulting from disagreements over individual contributions, how they should be reflected in authorship, and in extreme cases, whether contributions justify authorship at all (as opposed to acknowledgment or no acknowledgment). Authorship conventions differ markedly between disciplines, ranging from alphabetical ordering (mathematics, economics) to ordering by descending importance (high energy physics) to listing the principal investigator (PI) last (with the last author marking a special role). Authorship/contributorship decisions are made more complex by the increased prevalence of “big science” and interdisciplinary research (Wuchty, Jones, and Uzzi 2007). Because many more authors and collaborating labs now participate in a typical discovery and publication, each with distinct expertise, the complexity of determining authorship and decoding its implications for credit has increased (Greene 2007; Shen and Barabási 2014). When two, three, or more independent labs participate in a single publication, the challenge of attributing appropriate credit should be obvious, and the contributions of junior scientists and trainees may be rendered particularly anonymous.

In practice, decision-making authority on authorship usually rests with the lab chief or PI, or is settled by negotiation between two or more collaborating labs. Much therefore depends on the fairness and ethical standards of these individuals, who receive little or no formal training or guidance on this topic. Suggested approaches to handling authorship considerations prospectively within and among lab groups exist, but their application is currently limited (Harvard University and The Wellcome Trust 2012).

The large numbers of active scientists and publications today create additional challenges to the allocation of credit. With perhaps a million publications per year in the biosciences alone, no one is capable of comprehensively reading the literature in their field of interest, let alone the relevant work outside their field. Bibliometric approaches have therefore arisen to facilitate quantification and assessment of research output and impact (Hicks et al. 2015). The core approach involves quantitating citations to each paper in the publications of other scientists, on the theory that citations reflect the extent to which the scientific community views the work as relevant or important. For context, it’s worth noting that the majority of papers are cited rarely if at all, and a relatively small fraction of all published papers receive the majority of citations.
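As a concrete illustration of the citation-counting approach just described, the short sketch below tallies citations for a single hypothetical author and computes an h-index. The counts are invented for illustration, and the h-index is only one of many bibliometric summaries in use.

```python
# Minimal sketch of the core bibliometric operation: counting citations per paper
# and summarizing one author's output. The citation counts are hypothetical.

citations = [0, 0, 1, 1, 2, 3, 5, 8, 15, 40, 120]  # citations per paper, one author

total = sum(citations)
ranked = sorted(citations, reverse=True)
top_two_share = sum(ranked[:2]) / total  # illustrates skew: a few papers dominate

# h-index: the largest h such that h papers each have at least h citations
h_index = sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

print(f"papers: {len(citations)}, total citations: {total}")
print(f"share of citations from top 2 papers: {top_two_share:.0%}")
print(f"h-index: {h_index}")
```

Even this toy example shows the skew noted above: two papers account for most of the citations, and the single summary number (here an h-index of 5) says nothing about which author on each paper did what.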

Citation metrics are complex and influenced by numerous factors apart from the originality and importance of the reported findings (Lane 2010; Shen and Barabási 2014; Thorne 1977). For example, papers by scientists of high reputation are cited more frequently than similar papers by those of lesser fame, and papers in journals of high reputation and large readership are more highly cited than those in journals of lesser reach. Papers reporting research in fields that are more crowded or trendy are also more highly cited. Since citations refer to papers, which increasingly have many authors, it is difficult to translate a citation into an assessment of a specific author’s contribution. Increasingly, citation rates are influenced by papers being “promoted” by scientists, their institutions, the scientific or popular press, or various social media, which need not correlate with the veracity or importance of the research. One study showed that papers covered by the New York Times garnered more scientific references than similar papers not so covered (Phillips et al. 1991). So while citation indices do generally correlate with the importance and impact of discoveries, they are highly imperfect measures. One extreme indication of how this system can fail is papers continuing to be highly cited long after their conclusions have been disproven, or even retracted (Collins 2015; Halevi and Bar-Ilan 2016).

Scientific publishing has undergone dramatic changes over recent decades. These include a shift from print to digital platforms, and changes to peer review and editorial decision-making, which together determine what is published, where it is published, and when (Patterson and Schekman 2018). One development that is fairly recent in the biosciences, though long established in physics, is the rapid publication of articles on preprint servers, prior to peer review (Berg et al. 2016). Preprint servers accelerate the communication of research results, reduce the (sometimes flawed) control over science communication now exerted by reviewers and editors, and have the capacity to be truly disruptive. Preprint servers may be seen as promoting scientific communication in a manner similar to oral presentations. Today, after a period of time for feedback from readers, most papers posted on preprint servers are submitted for peer review in traditional journals. It is not yet clear to what extent publication without review on a preprint server will eventually mark priority and credit, though it seems likely this will happen. It also seems likely that preprint servers, supplemented by “post-publication review,” will grow in importance, even eventually replacing the current publishing model (Vale and Hyman 2016). Additionally, new kinds of scientifically relevant data and insights, including datasets, software, and others, are being disseminated through channels outside those of traditional scholarly publishing. These contributions must be captured more effectively than they are today when allocating scientific credit.

Two specific, if unsavory, categories of authorship are honorary (guest, gift, courtesy, or prestige) and ghost authorship (Jabbehdari and Walsh 2017). Honorary authorship is the granting of authorship out of appreciation or respect for an individual, or in the belief that the reputation or standing of the individual would increase the likelihood of publication or the status of the work. Department chairs or other leaders not materially involved in the work may expect or demand such authorship, and use their influence to obtain it. Ghost authorship occurs when someone makes a substantial contribution to the research or the writing of a paper, without being listed as an author. In one variety, a company hires a writer to prepare a manuscript, then recruits an academic to be listed as author. Today, such behaviors by the involved faculty member are considered academic misconduct.

The reproducibility and truthfulness of the published literature are less robust than previously thought (Flier 2017b; Goodman, Fanelli, and Ioannidis 2016; Ioannidis 2005). This causes misalignment between the publication-induced credit/reputation of scientists and the validity and importance of their work. Given these limitations, literature assessments and bibliometric analyses must be supplemented by objective expert opinion when assessing credit and reputation.

Disputes Over Authorship and Credit Are Common and Difficult to Adjudicate

The potential adverse consequences of disputes over authorship and credit are substantial, and they may include delayed publication and career progress, as well as damage to reputations and personal relationships. When authorship and credit disputes arise, they are initially addressed within the institution where the work occurred. In most instances, it falls within the principal investigator’s purview to handle these informally; if one or more parties remain unsatisfied, issues may be brought to the attention of the academic department, and if necessary, other institutional officials. Authorship disputes may be brought to journal editors, but they typically view such issues as outside their responsibility (Harvard University and The Wellcome Trust 2012).

Many institutions employ ombudspersons, officials whose responsibilities include involvement in such questions (Shelton 2000). In the Harvard Medical Area, with a full-time faculty of over 10,000, the ombudsperson met with 443 individuals in AY2018; approximately 70 of these came with authorship concerns. Whether this suggests that the prevalence of such issues is relatively low, or represents the tip of an iceberg, is difficult to know, though I suspect the latter. The role of the impartial ombudsperson is to provide an opportunity for visitors to identify and discuss their issues, goals, and options for next steps in a confidential, independent, and informal setting. An ombudsperson may coach a visitor on how to conduct a difficult authorship conversation and may also facilitate such conversations, with the goal of increasing understanding and resolution. When not resolvable in this manner, an ombudsperson may suggest more formal options, such as referral to a dean for research or other institutional official. Ultimately, the visitor decides next steps. Many seeking guidance fear the negative consequences of formally challenging their supervisors, given their critical role in future career opportunities. As a result, relatively few cases end up on the desks of deans for research integrity or individuals with similar responsibilities.

Resolution of such disputes is often quite challenging. This results from unequal power relationships among disputants, varying quality of institutional advisory and investigatory processes, and lack of clarity about prevailing authorship criteria. These combine to produce outcomes that may seem unfair to those involved, especially those of junior status. The objectivity of such reviews may also be influenced by conflicting institutional interests, such as a desire to protect the reputation and funding of a successful faculty member, or the financial interests of the lab head or the institution in patents or business arrangements linked to discoveries and publications. Finally, while publications are the public record of research, disputes over authorship are treated as highly confidential by institutions. Unless publicized by one or more disgruntled parties, or (much more rarely) brought to adjudication by the legal system, details of such disputes or their very existence typically remain unknown to the scientific community; most are never effectively resolved (Wager 2009). Credit disputes are more likely to garner attention when related to discoveries that have changed a field, resulted in new treatments for disease, or garnered major awards and financial benefits for the discoverer(s). But disagreements over credit for lesser discoveries also occur; these are important to the participants, if less so to the broader community.

Inquiries contemporaneous to authorship disputes are technically easier to conduct due to access to key participants, but from a social perspective they tend to be more fraught. Prominent scientists may face adverse reputational and financial consequences from credit controversies. Those accorded credit are often well connected and influential, and they vigorously resist reconsideration of credit previously accorded to them. Participants with greatest knowledge of the facts may be disinclined to initiate or cooperate with such inquiries, fearing controversy or retribution. Consequently, senior scientists with special knowledge and interest in a particular discovery, journalists, or historians of science are the most likely to initiate inquiries questioning prior credit assignments.

Allocation of credit for discoveries, and the linked reputation of scientists, can change over time, as new knowledge and facts emerge. Revision of the historical record may occur relatively soon after a discovery is reported, or decades or centuries later. These revisions may become fertile topics of PhD dissertations, books, and tenure decisions for historians of science (Maddox 2003; Selya 2003). As stated by Merton (1957), “history serves as an appellate court” capable of reversing prior judgments, at least for the most important discoveries. In these cases, final recognition may be “allocated by those guardians of posthumous fame, the historians of science,” whose well-documented judgments are typically rendered at a time when the participants are unable to protest or applaud their conclusions.

Awards and Prizes

Awards and prizes have been employed to recognize scientific accomplishment since the 18th century (Zuckerman 1992). Their role is complex, as they both recognize and mark preexisting attributions of credit, and articulate and celebrate new attributions of credit through their decisions. In addition to honoring past achievements, they promote future research by incentivizing researchers who value peer recognition and financial rewards, whether for personal use or to support their research.

As the amount of research activity has expanded over recent decades, so too have the number, identity, and impact of awards and prizes (Zuckerman 1992), which vary greatly in their goals, methodologies, reputation, and impact. Some recognize “career accomplishments” in specific fields (and may focus on early or late career), while others, such as the Nobel Prize, recognize specific discoveries in stipulated fields. Although most scientists never win awards, and only a tiny number win or are considered for the most prestigious among them, awards incentivize the pursuit of science and benefit the research enterprise by bringing it favorable attention. Since most academic research is publicly funded, favorable public attention is an important factor in determining the level of support.

Awards and prizes differ greatly in importance. The reputation accorded specific awards is influenced by their record for rendering high-quality, impactful selections that have stood the test of time, the eminence of the awarding bodies and selection panels, and to some extent the financial magnitude of the prizes. The Nobel Prize stands above all others for its impact and recognition. There is a trend toward new prizes with very large monetary awards, designed to bring glory to both the recipients and the backers of the prize. A recent example is the Breakthrough Prize (Breakthrough Prize 2018).

Awards and prizes both reflect and provide new inputs to the allocation of credit for discovery. How well do they fulfill this function? No simple answer is possible. The mechanisms and criteria for selecting recipients differ for each award. Potential awardees must first be nominated; some nominations are publicly solicited, others not. Nominations for the Nobel Prize can only be made by individuals requested to do so by the Nobel Assembly. Factors apart from excellence of research, such as gender and institutional affiliation, likely influence nomination for specific awards (Lincoln et al. 2012). Once nominated, research (and researchers) are scrutinized by specific panels; external expert opinions may be sought.

What do these panels actually do? While details vary, there are common features. Selection panels are tasked with assessing the scientific importance and impact of the work (its prize-worthiness), the validity of the claims being recognized (to avoid mistakes such as irreproducibility), and the discovery’s originality (a major criterion underlying peer recognition in science). All this is done while comparing the relative merits of diverse and scientifically unique discoveries. Selection committees choose winners from larger groups whose members might reasonably be considered for recognition for the same or related work. Having sat on and chaired several such panels, I can attest that many assessments evoke vigorous disagreements among panel members, requiring repeated votes to finalize choices. The appropriate choice of recipients is made more challenging by limits on the maximum number of awardees for specific awards (for example, a limit of three for each Nobel Prize). Another key variable is how award citations are articulated. They may be articulated narrowly with great specificity or more broadly, encompassing a field or group of related discoveries. Such factors affect the choice of recipients and reflect issues of scientific taste on which scientists often disagree, as much as objective assessment of the originality and importance of the research.

Not uncommonly, different elite awards choose distinct or overlapping sets of awardees for the “same discovery.” This may reflect competing views about the most critical elements of a discovery, or different perspectives (perhaps based on different sources of information) on who among those involved most deserves recognition. That such differences exist is hardly surprising. They may also reflect the different points in time after a discovery at which the awards are made. With the passage of time, both the prize-worthiness of an awarded discovery and the choice of recipients may be called into question. For example, since the first Nobel Prizes in 1901, this preeminent award has generated its share of controversy. Some controversies have related to the choice of discoveries that later proved unworthy (such as the 1949 Prize in Physiology or Medicine for development of prefrontal lobotomy for mental illness), others to the contested choice of winners for a worthy discovery, as when a scientist judged to have merited recognition at least equal to that of awardees is excluded.

Although recipients of awards and prizes are selected by scientific judges after objective analysis and debate, selections are also influenced by less objective factors. These include the preexisting reputation of potential winners (the Matthew effect), gender (the Matilda effect), the influence of institutions or countries where the work was conducted, or the social networks, interests, and biases of influential award committee members, especially prior recipients of the most important awards, whose opinions carry disproportionate weight (Lincoln et al. 2012; Merton 1968). “Campaigns” to influence selections, conducted by scientists, their friends, or institutions, are well known to occur. The Nobel Prizes explicitly discourage campaigns, and though they do occur, their impact is hard to assess. Scientists also differ in personality in ways that likely influence the potential of their receiving awards and credit. Some scientists take an aggressive interest in their own promotion, while others do not.

Since award selection deliberations are confidential, the scientific community and public are unaware of issues adjudicated during selection. For the Nobel Prize, records of deliberations are closed to public scrutiny for 50 years. As a result, awards and prizes stand on the historical record of the quality of their selections. Independent reviews of the accuracy/quality of specific awards are rarely conducted, though historians of science take an interest in this topic. Critiques of specific award choices arise most often from journalists, who in the course of reporting on major awards seek opinions of experts with different perspectives.

The increased prevalence and increasing necessity of team science raise issues for the future of awards. When a team produces a prize-worthy discovery, should one team member be selected to receive the award, or should the entire team be recognized? Within a team, those recognized are most often research leaders or PIs. This reflects their primary responsibility for the conduct of the work, and their sustained accomplishment towards the goal, in contrast to the more limited contributions, in time and substance, by most trainees and other associates. On the other hand, trainees and associates sometimes provide critical insights to important discoveries. Not surprisingly, failure to recognize coauthors and collaborators has engendered disputes and disagreements (Flier 2019). Importantly, even when they do not receive awards, scientists associated with prize-winning discoveries benefit professionally from credit derived through these associations. Institutions also covet and derive important reputational benefits from awards granted to their faculty.

Taken together, awards and prizes contribute importantly to the allocation of credit in research, especially at the highest levels. But though award choices are most often well considered, they are subject to error and debate, and eventual historical correction. In the future, awards and prizes will likely need to adapt their approaches to changes in the conduct of research.

Appointments, Promotions, Tenure, and Other Means for Allocating Credit

Academic research careers require securing academic appointments, and then for the most successful, promotion, ideally to a tenured position. Promotion decisions assess research, contributions to teaching and institutional service (as well as clinical contributions for clinician-scientists), and overall peer recognition and reputation. However, to the extent that faculty are judged for their research, a core criterion for advancement is how they are credited for research contributions, which shapes their scientific reputations. Accordingly, institutional officials are “consumers” of publication metrics and awards, each with limitations as described above. Institutions supplement this information with confidential assessments by internal and external experts who are asked to provide objective and personal views of a candidate’s contributions and reputation. These assessments, not infrequently discordant, are further discussed by committees of institutional scholars tasked with providing recommendations to deans and other academic leaders responsible for final promotion decisions. Although these processes share many common features, details differ between institutions and across disciplines. While the quality of outcomes can be examined using publicly available information, the processes themselves—the evidence obtained, and the substance of discussions—remain confidential and unavailable for independent analysis.

Credit and reputation are also recognized through honorary professorships and degrees, and election to honorary societies. Such recognition may link directly to specific discoveries, but even when it does not, credit for discovery influences selections.

Another venerable form of recognition is eponymy, whereby the name of a scientist becomes affixed to all or parts of what he or she has discovered (Merton 1957). Names can be applied to entities as disparate as epochs (Darwinian), “fathering” a new field (Morgagni, the father of pathology), diseases (Cushing’s disease), and even parts of anatomy (Eustachian tube). Eponymy arises spontaneously over time, rather than being proposed or endorsed by any official body, and its use as a reward seems less common in recent years.

Patents and Credit

Patents are not relevant to most credit allocations, since the vast majority of research publications do not involve patentable discoveries, and many patented discoveries (especially those from industry) are not associated with papers published in academic journals. However, when discoveries do lead to patents, they can influence how credit for discovery is allocated. For example, it would be difficult to deny “academic” credit for an important discovery to a scientist judged to be the sole inventor of the relevant technology. It is clear, however, that patents and credit subserve distinct functions. Legal scholar Catherine Fisk (2006) compared the functions of credit for discovery to assignment of intellectual property via patent law. Intellectual property/patent considerations enable commercial development, the purpose for which patents arose as a legal form. On the other hand, credit is essential to establishing scientific reputations, thereby promoting human capital. What is the relative importance of scientific credit versus intellectual property in the career of a typical scientist? The economic value of credit and reputation is realized throughout a scientist’s career, and Fisk found that for the vast majority of scientists, it exceeds the economic value derived from patents. In rare instances, however, financial payoffs to academic scientists from inventorship on patents are very large.

Academic discoveries lead to patents when the discoverer/inventor(s) and their institution (owners of the patent rights through previously signed employment agreements) judge the discovery as having potential to produce practical outcomes with commercial benefit. A potentially patentable discovery triggers disclosure to institutional officials, who assign experts to objectively determine patentability (which requires novelty, sufficient detail for others to reproduce the discovery, and non-obviousness), and to identify the inventors. In parallel, potential commercial value is assessed; if judged to be insufficient, a patent application will likely not be filed.

Since inventorship has financial implications, its determination typically involves review of documentation and lab notebooks and interviews with key participants. It is important that the “correct” inventors are named, since claims of inappropriate exclusion (or inclusion) of inventors undermine a patent’s strength (Seymore 2006). Unlike the formal inquiries undertaken to certify appropriate “inventors,” no parallel institutional process exists for allocating authorship and credit, either prospectively or in response to conflicts. The criteria for authorship may overlap with—but are quite distinct from—criteria for assignment of inventorship (Haeussler and Sauermann 2013; Lissoni, Montobbio, and Zirulia 2013). Many or most individuals granted authorship through contributions to a discovery are not listed as inventors on a related patent. Financial implications of inventorship on patentable discoveries might influence authorship decisions, but how often this occurs is unknown.

Credit versus Responsibility

It should never be forgotten that credit also entails responsibility for the claims of discovery (Rennie, Yank, and Emanuel 1997). In the case of a single-authored paper, this is obvious. But just as credit is more difficult to attribute for research involving large, interdisciplinary teams, so too is allocation of responsibility more difficult when the veracity or integrity of multiauthored research is questioned. It may be challenging for “senior authors” to meaningfully vouch for the veracity of every element of a complex, multiauthored project that crosses disciplinary boundaries. That being the case, to what extent should senior authors be allocated credit for work they cannot fully evaluate and vouch for? It has been proposed that one author should be required to serve as a “guarantor” of the entire paper, which might induce constructive behaviors (Rennie, Yank, and Emanuel 1997). There are many examples of senior authors who were eager to accept credit for a study they participated in or led, but reluctant to accept responsibility when problems arose, instead blaming coauthors for mistakes or misconduct (Hunt 1981; Kolata 2018). The risk-and-benefit calculations of such scenarios are asymmetric: the attraction of credit greatly outweighs concerns about assumption of responsibility should issues of veracity or misconduct arise.

Consequences of Credit and Priority for the Scientific Enterprise

Once it is accepted that the great majority of scientists (at least those in academia) desire credit, it remains to be determined whether a scientific culture steeped in credit and priority promotes or harms the overall scientific enterprise. In this regard, we should distinguish between the consequences of credit for individual scientists and for the scientific enterprise as a whole.

As the most important “currency of the realm” in the academy, credit for discovery and subsequent effects on reputation are critically important to individual scientists. Although scientists vary in the extent to which credit-seeking motivates their work, very few are indifferent to credit when they believe it is their due. Recognition of research accomplishments and subsequent reputational benefits are necessary to obtain jobs and job security, and they influence compensation, promotion, research funding, space, and other institutional support. Credit and reputation also influence the quality of a scientist’s trainees—who conduct most of the work and contribute many new ideas—and the willingness of collaborators to join in common efforts. In this way, credit and reputation beget further success and credit over time.

But does the pursuit of credit and priority promote the overall productivity of science? If the scientific community were driven solely by curiosity and mission (if that could be achieved), would scientific knowledge be advanced more effectively? This important question has long been of interest to sociologists, philosophers, and economists. Sociologist Robert Merton concluded that the desire for credit derives substantially from the prevailing norms of science. Based on his familiarity with scientific work, methods, and organization, Merton (1957) described four such norms: common ownership of scientific results that are shared freely without secrecy; universalism and objectivity as standards by which scientific results are evaluated; disinterestedness, whereby self-interest doesn’t subvert the other norms; and organized skepticism, the detached scrutiny of beliefs by empirical and logical criteria. Though much has been written about these norms, including whether they are truly universal and whether additional norms should be added, the impact of Merton’s assessment has been durable. Merton believed these norms exerted “pressure upon scientists to assert their claims, and this goes far toward explaining the seeming paradox that even those meek and unaggressive men, ordinarily slow to press their own claims in other spheres of life, will often do so in their scientific work.”

Philosopher Michael Strevens (2003) examined the role of one specific norm, the “priority rule,” by which (most) credit goes to the first individual reporting a discovery. Like Merton, Strevens saw this reward structure as maximizing the beneficial social output of research. Strevens argued that social arrangements that disproportionately reward priority incentivize the efficient use of resources, since the greatest social benefits of a discovery derive from its first instance, even when a second independent instance follows a short time later. This is true even though the timing of the two independent discoveries likely resulted from good fortune, rather than superior brilliance or skill of the winner. Strevens concluded that directing dominant rewards to the first discovery produces the greatest net benefits to society. While a “race” might waste resources spent by the “losers,” he believed it increased the likelihood of an important discovery being made by at least one of the competing groups. A second instance of a discovery is hardly without value, however, if it provides confirmation, clarification, or correction to the initial discovery.
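A stylized illustration of this point about races (my simplification, not Strevens’s own formalism): if each of $n$ independent groups pursuing the same problem succeeds within a given period with probability $p$, the probability that at least one of them makes the discovery is

\[
P(\text{at least one discovery}) \;=\; 1 - (1 - p)^{n},
\]

which rises with $n$. With $p = 0.3$, for example, two competing groups yield $1 - 0.7^{2} = 0.51$ rather than $0.3$ for a single group, though the effort of whichever group finishes second is largely “wasted” from the standpoint of priority, if not from the standpoint of confirmation.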

Economics is another field that has examined the role of credit and priority in the scientific enterprise (Dasgupta and David 1994). Economists recognize the importance of science to the growth and competitiveness of modern economies, and they have been interested in how the norms and behaviors of research influence the “allocative efficiency” of the ecosystem. In 1962, Kenneth Arrow considered how society could best extract value from the discoveries of basic science. He recognized the difficulty in predicting and identifying the links between basic discoveries and downstream creation of social value. This, together with difficulty assigning ownership of academic discoveries, would limit the efficient extraction of value from academic research, creating a “market failure.”

Incentive systems that reward successful discovery are anticipated to outperform systems in which scientists are rewarded only for effort or dedication. Two features of allocative efficiency are speed of discovery and efficient dissemination. Regarding speed, the desire for priority causes research programs to run as races. To function efficiently, races must be appropriately incentivized and rewarded. Some believe this paradigm operates most efficiently when scientists receive a reliable base salary (to reduce the risk of pursuing science when only winners are compensated), supplemented by peer recognition, grants, prizes, promotions/positions, and enhanced compensation—all linked to their success in discovery (Dasgupta and David 1994; Strevens 2003). How do these economic insights relate to the ways in which credit and priority are actually organized? From the economic perspective, discovering and publishing quickly facilitates receiving credit, and simultaneously serves allocative efficiency by countering the competitive advantages of secrecy (as seen in industry). Publication also enables others to apply discoveries in economically useful ways.

Thus, the consensus among sociologists, philosophers of science, and economists is that the pursuit of credit by scientists, and the systems and norms that enable it, enhance the overall efficiency of the scientific enterprise.

Excessive or Pathologic Pursuit of Credit

Notwithstanding the conclusions of sociologists, philosophers, and economists that scientific norms and behaviors favoring the allocation of credit benefit the research enterprise, a view with which I concur, it seems clear that exaggerated or pathologic credit-seeking behavior undercuts discovery, diminishes public respect for science, and may reduce the pleasure of pursuing science.

It is difficult to define a precise border between healthy and productive credit-seeking behaviors and those that are excessive or pathologic. In its most extreme varieties, pathologic interest in credit and recognition undermines the primary commitment to the truth. How might this occur? Excessive desire for credit may lead susceptible scientists to promote or tolerate sloppiness and errors, or engage in one of the hallmarks of scientific misconduct—plagiarism, falsification, or fabrication—even if such “credit” is in the end short-lived. There is of course no moral justification for credit-seeking that leads scientists to abandon their fealty to the truth. But history shows that pathologic desire for credit drives a small minority of scientists to this deeply unfortunate outcome. The scientific community must take seriously behaviors that violate the norms of science via exaggerated pursuit of credit, and devise and deploy approaches to reduce these unfortunate outcomes.

Possible Approaches to Improving the Credit Ecosystem

In this paper, I have examined the system by which credit for scientific discovery is allocated, with a focus on research in the biomedical sciences. The inquiry has been placed into historical context, including perspectives from within the scientific community and from disciplines including history, sociology, philosophy of science, and economics. Both the daily operations and the culture of academic biomedical research are deeply connected to the pursuit of credit and recognition. This follows from the longstanding—if still evolving—norms of science, from incentives to which these norms are linked, and from human nature, whose attributes are derived from both evolutionary forces and broader cultural influences.

Prevailing norms for obtaining and allocating credit are functional and have proven to be relatively durable, but they are imperfect. When problems occur, they are generally addressed by negotiation, multiple levels of peer review, occasional institutional intervention, and for a limited number of the most important cases, by historical judgment. But many scientists appear to be unhappy with how credit and recognition are allocated today. Major changes over recent decades in the way research is conducted, communicated, and managed have not yet been sufficiently reflected in changes to credit norms, and this lag is likely the root of the problem. These issues deserve our attention.

Changing long-established systems for allocating credit will be difficult. The underlying events play out across diverse and independent institutions. They are linked at multiple levels to ongoing incentives and behaviors and are seen to perform well enough by many participants. Aware of these challenges, I suggest a number of ideas for consideration.

Academic Institutions Must Assume Greater Responsibility for the Allocation of Credit

Credit for discovery is the “currency of the realm,” and the realm in which the currency operates is the community of academic institutions. Just as governments issuing financial currency are responsible for limiting counterfeit varieties, so must the academy assume greater responsibility for limiting counterfeit academic credit.

Academic researchers are employed by organizations whose institutional reputations reflect the accomplishments of their faculties. How scientists relate to these institutions may not be obvious to outsiders. Although employees, faculty members raise the funds that cover most of the direct costs of their research (and in many cases much of their compensation) by applying for grants, contracts, and gifts from external funders. In turn, the organizations (universities, schools, hospitals, institutes) provide research space and a variety of scientific, financial, and administrative support systems. Institutions also establish, support, and oversee the academic departments within which appointments/promotions and academic advancement take place. Advancement decisions require institutional assessment of credit and recognition accorded to their faculty. Improving these systems is a core institutional responsibility.

Publications and Authorship

As stated repeatedly in this paper, publications are at the center of systems for allocating academic credit. Not surprisingly, most institutions have written policies regarding authorship, similar or identical to those proffered by organizations such as the ICMJE (2018). Some institutions establish local variations on these standards. How are these policies and standards transmitted to their scientific communities? Courses in the responsible conduct of research, required by the National Institutes of Health and other agencies, expose trainees (but typically not faculty) to what are today limited and ambiguous authorship standards (NIH Office of Extramural Research 2009).

Despite the high prevalence of uncertainty and disagreement over authorship and credit, I believe institutions treat these issues with insufficient seriousness and put too little effort into proactively addressing their root causes. For example, Harvard Medical School (HMS) has standing faculty committees that address scientific and professional misconduct as well as conflict of interest and commitment. But although authorship is far more central to faculty life than misconduct or conflict of interest, there is no standing committee to address authorship and credit. Today, authorship and credit disputes rarely rise to the level of the institutional misconduct committee. This likely reflects the "norm" that such issues are the responsibility of the PI or corresponding/responsible author, and the fact that challenging such decisions, even in extreme cases, is perilous for those who consider doing so, owing to asymmetric power relationships.

The existence of a dedicated committee would signal that the institution deems the subject to be important and would facilitate reexamination and codification of expectations for authorship processes, behavior, and remedies. Academic institutions should periodically update authorship policies in a deliberative fashion responsive to current realities and make efforts to ensure they are understood by all participants. This would likely require training modules to present standards and expectations for authorship, with carefully crafted examples to illustrate common problems. Authorship should be discussed prospectively with participants from the earliest stages of a project and reviewed as necessary along the path to publication.

One cause of disputes over authorship may be dysfunctional research environments characterized by low levels of trust. While such environments may result from faculty who are ungenerous or unethical, they more often result from a failure to set clear expectations, maintain ongoing communication, and provide oversight, compounded by the absence of clear institutional standards.

Academic departments rarely provide such expectations or training today; authorship standards are seldom addressed proactively, and they vary from one scientist, department, and institution to another. When concerns arise, it is often difficult to obtain guidance or resolution, especially for those of junior status. Institutions should establish and advertise pathways for conflict resolution beyond [End Page 208] those now offered by ombudspersons, who, though highly valuable, are themselves insufficient to the task.

One personal anecdote may be illuminating. In researching this paper, I obtained the HMS authorship guidelines, adopted in 1999. Available online, these guidelines state that, independent of requests by journals, primary authors should "prepare a concise written description of their contributions to the work, which has been approved by all authors. This record should remain with the sponsoring department" (HMS Office for Academic and Research Integrity 1999). This was the precise proposal I had intended to (and will) make in this paper, but despite being the school's dean for nine years and a faculty scientist for many more, I was unaware of this policy. An informal survey provided no evidence that faculty or administrators are aware of it.

This anecdote suggests several important lessons. First, the initial reaction to a newly recognized problem is often to call for the creation of new policies, which may indeed be needed. But as this case illustrates, problems frequently reflect a failure to be aware of, understand, implement, or enforce policies that already exist. The obvious question is: why did a policy adopted in 1999 fail to achieve its intended purpose? Here the high-level answer is clear: over the last 20 years, despite the policy being adopted by a faculty committee and approved by the faculty council, the institution at numerous levels has not viewed the behaviors linked to the policy as sufficiently important to ensure they are brought to the attention of faculty, academic leaders, and administrators tasked with overseeing faculty behavior and evaluation. This may not be surprising. During this period, much greater attention has been paid to financial conflict of interest and research misconduct. Problems in these areas have regularly been highlighted by the press, bringing pressure on institutions to respond. As a result, institutional policies on conflict of interest and misconduct became formalized, linked to required disclosures, defined administrative procedures, and specific remedies. In contrast, authorship and credit were not deemed particularly pressing and were seen largely as the province of individual faculty and departments. As a consequence, the authorship policy was "forgotten" by all parties and has not been updated since 1999.

Given that authorship is the "currency of the realm," that there are perceived problems with its allocation, that the integrity of this currency is central to the mission of the academy, and that alternative organizations (such as journals and funders) are incapable of addressing these issues, academic institutions must take a new, more serious approach to authorship, including responsibility for remediating disputes that cannot be resolved at local levels. Institutions that have no relevant policies and guidelines should develop them, and those that have them should update them. Educational programs for faculty and trainees should be developed and implemented, including training modules reflecting the range of issues that arise and the available paths to dispute resolution. Central to accomplishing this, and [End Page 209] as suggested by the 1999 HMS policy, faculty should produce written documentation of detailed contributions to the work, approved by participants, updated when necessary, and preserved in the records of the academic department.

Going forward, such records should be included in the confidential dossiers used to evaluate promotions and be available to chairs and deans should authorship issues arise. Publicizing this expectation, and creating a standing committee of senior faculty to revise the policies and monitor their implementation, would go a long way toward making the new expectations clear. Making promotions contingent on review of such materials would send a powerful signal. The policy would surely represent a change from the status quo, and some faculty will likely see it as an administrative intrusion upon their independent prerogatives as principal investigators. But if faculty expect to be rewarded by institutions for their contributions as authors, strengthening the legitimate claims to credit of all contributors should in the end be seen as beneficial. After an expected period of resistance or confusion, new norms would arise.

Appointments and Promotions

As discussed above, one way to use the process of appointment and promotion to strengthen the system for allocating credit is to require faculty to formally record their detailed contributions to the publications they author, in documents available to evaluation committees. More systematic evolution of the criteria for promotion might also be considered, to better reflect the expanding role of interdisciplinary and multi-lab research. In the not-so-distant past, and perhaps in some quarters today, promotion in biomedicine required predominantly or exclusively first- or last-authored publications. This approach limited the ability of many collaborative researchers, and most clinical investigators, to be recognized through promotion, especially to the highest levels. It also reduced the incentive to undertake such research. Increasingly, institutions are revising criteria to recognize well-documented contributions by those who are neither first nor last authors. HMS revised its promotion criteria 12 years ago, and having chaired senior promotions for nine years, I have seen a noticeable effect on the promotion of faculty pursuing collaborative research, though the detailed consequences of this transition have yet to be objectively studied. An additional approach would be to require more detailed elaboration of the nature and extent of contributions to multi-lab research, to better reflect the complexity of much modern science and assign accurate credit to all team members. Efforts might also be made to design new metrics for promoting scientists whose contributions, though largely collaborative, are essential and of high impact, conceivably through new faculty promotion criteria for such individuals.

In evaluating faculty for promotion, it is essential to review contributions to the published literature and assess the number and impact of papers. Given growing concern about the irreproducibility of the published literature, it will be important [End Page 210] to place greater focus on the reproducibility of published work, exploring situations where the reproducibility of cited work has not been demonstrated or has been questioned. In my experience, questions about reproducibility do arise in the course of reviews, but they are not reliably pursued during the review process (Flier 2017a).

Awards

As discussed above, awards and prizes are important components of the reward system of science, conferring benefits upon both individual scientists and the broader scientific enterprise. Because awards are voluntary activities of many different organizations interested in science and medicine, their goals and procedures will always be varied. In light of the changing landscape of research, some changes might be considered for future awards.

Number of awardees

It is understandable that awards and prizes typically restrict the maximum number of recipients. Doing so permits focused attention on one or a few people, so that recognition is undiluted. When the "right" people are chosen for a particular citation, this is an appropriate approach, and it worked well for many years during which most discoveries were the product of single individuals or small groups. While individuals and small groups will continue to deserve credit for discoveries in the future, the increased importance of interdisciplinary and large-group science suggests a need for additional award options. For example, if five people were truly responsible for a particular discovery (however determined), there is no good reason to limit recognition to only three, or to forgo an award on that account. Even where the number of awardees is fixed by longstanding tradition, as for the Nobel Prizes, this change deserves consideration.

Nature of awardees

Overwhelmingly, awards are made to individuals. This recognizes the undeniable role of individual creativity and persistence as drivers of discovery, which will remain important in the future. But as science has changed, some awards should be made to research teams rather than to individual scientists. This would formally recognize that, in appropriate instances, a team was essential to a discovery in a way that no single individual could be. In such cases, specific individuals could receive the award on behalf of the team, but it would be the team that the award acknowledges. The 1985 Nobel Peace Prize, awarded to the organization International Physicians for the Prevention of Nuclear War, provides a precedent that could be extended to the sciences. I was involved in a recent award in which the committee considered recognizing, by name, a for-profit biopharmaceutical company for a therapeutic achievement in which it played a centrally important role. No one doubted that the achievement was prizeworthy, but several panel members objected on principle to granting the award to a company, preferring that it go instead to one scientist from the company. In appropriate instances (this, in my opinion, being one), it would be forward-thinking to award a prize to [End Page 211] a company in recognition of its critical contributions to a discovery or its implementation. The criteria and standards would be no different from those employed to choose any other team.

Posthumous awards

Awards are typically made to the living, who are able to enjoy and benefit from receiving them. But awards also serve to recognize the enduring value of discoveries, and that value necessarily extends beyond the lives of the discoverers. For this reason, we should be willing to credit recently deceased scientists for their accomplishments. This may be especially important when an award deserves to be made to a living discoverer, while an equally deserving scientist who died before the award could be made is thereby deprived of appropriate recognition.

Conclusion

The proper allocation of credit for discovery is, alongside brilliant and well-trained scientists, supportive institutions, and an effective economic model, among the critical requirements for a scientific enterprise that advances our understanding of the natural world. Although the desire for credit and its consequences for efficient discovery are longstanding attributes of the enterprise, changes in the conduct and organization of modern science and scientific publishing raise important new challenges to the proper attribution of credit. Academic institutions bear the greatest responsibility for establishing an improved credit ecosystem, working in concert with the rapidly evolving scientific publishing system, with funders of research for whom credit is a critical variable, and with those who bestow awards and prizes.

Because the credit ecosystem is distributed, with interconnected incentives operating across diverse organizational contexts, the mission is complex and rapid change cannot be expected. But this is the time to accelerate the effort, and the place to do it is the academy. I hope this review of the credit ecosystem, and the several modest proposals for reform offered here, will encourage the community to take up the call.

Jeffrey S. Flier
Department of Medicine and Neurobiology, Harvard Medical School, 220 Longwood Avenue, Goldenson 542, Boston MA, 02115.

References

Arrow, K. J. 1962. “Economic Welfare and the Allocation of Resources for Invention.” In The Rate and Direction of Inventive Activity: Economic and Social Factors, 609–26. Princeton: Princeton University Press.
Arrow, K. J. 1972. “Economic Welfare and the Allocation of Resources for Invention.” In Readings in Industrial Economics, 219–36. Springer.
Berg, J. M., et al. 2016. “Preprints for the Life Sciences.” Science 352 (6288): 899–901.
Biagioli, M. 1998. “The Instability of Authorship: Credit and Responsibility in Contemporary Biomedicine.” FASEB J 12 (1): 3–16.
Breakthrough Prize. 2018. “Breakthrough Prize—‘The Oscars of Science’—Celebrates Top Achievements in Physics, Life Sciences, & Mathematics, Awards $22 Million in Prizes at Gala Televised Ceremony in Silicon Valley.” https://breakthroughprize.org/News/41.
Campbell, P. 1999. “Policy on Papers’ Contributors.” Nature 399 (6735): 393.
Collins, K. 2015. “Why Researchers Keep Citing Retracted Papers.” Quartz. https://qz.com/583497/researchers-keep-citing-these-retracted-papers/.
Csiszar, A. 2018. The Scientific Journal. Chicago: University of Chicago Press.
Dasgupta, P., and P. A. David. 1994. “Toward a New Economics of Science.” Res Policy 23 (5): 487–521.
Fisk, C. L. 2006. “Credit Where It’s Due: The Law and Norms of Attribution.” Geo L J 95: 49.
Flier, J. S. 2017a. “Faculty Promotion Must Assess Reproducibility.” Nature 549: 133.
Flier, J. S. 2017b. “Irreproducibility of Published Bioscience Research: Diagnosis, Pathogenesis and Therapy.” Mol Metab 6 (1): 2.
Flier, J. S. 2019. “Starvation in the Midst of Plenty: Reflections on the History and Biology of Insulin and Leptin.” Endocr Rev 40 (1): 1–16. DOI: 10.1210/er.2018-00179.
Goodman, S. N., D. Fanelli, and J. P. A. Ioannidis. 2016. “What Does Research Reproducibility Mean?” Sci Transl Med 8 (341): 341ps12–341ps12.
Greene, M. 2007. “The Demise of the Lone Author.” Nature 450 (7173): 1165.
Haeussler, C., and H. Sauermann. 2013. “Credit Where Credit Is Due? The Impact of Project Contributions and Social Factors on Authorship and Inventorship.” Res Policy 42 (3): 688–703.
Halevi, G., and J. Bar-Ilan. 2016. “Post Retraction Citations in Context.” Proceedings of the Joint Workshop on Bibliometric-Enhanced Information Retrieval and Natural Language Processing for Digital Libraries (BIRNDL). https://www.aclweb.org/anthology/W16-1503.
Harvard Medical School (HMS) Office for Academic and Research Integrity. 1999. “Authorship Guidelines.” Harvard Medical School. https://hms.harvard.edu/sites/default/files/assets/Sites/Ombuds/files/AUTHORSHIP%20GUIDELINES.pdf.
Harvard University and The Wellcome Trust. 2012. “Report on the International Workshop on Contributorship and Scholarly Attribution.” https://projects.iq.harvard.edu/attribution_workshop.
Hicks, D., et al. 2015. “Bibliometrics: The Leiden Manifesto for Research Metrics.” Nature 520: 429–31.
Hunt, M. 1981. “A Fraud That Shook the World of Science.” NY Times, 1 Nov. https://www.nytimes.com/1981/11/01/magazine/a-fraud-that-shook-the-world-of-science.html.
International Committee of Medical Journal Editors (ICMJE). 2017. “Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals.” http://www.icmje.org/icmje-recommendations.pdf.
International Committee of Medical Journal Editors (ICMJE). 2018. “Defining the Roles of Authors and Contributors.” http://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html.
Ioannidis, J. P. A. 2005. “Why Most Published Research Findings Are False.” PLoS Med 2 (8): e124.
Jabbehdari, S., and J. P. Walsh. 2017. “Authorship Norms and Project Structures in Science.” Sci Technol Hum Values 42 (5): 872–900.
Kolata, G. 2018. “He Promised to Restore Damaged Hearts. Harvard Says His Lab Fabricated Research.” NY Times, 29 Oct. https://www.nytimes.com/2018/10/29/health/dr-piero-anversa-harvard-retraction.html.
Lane, J. 2010. “Let’s Make Science Metrics More Scientific.” Nature 464 (7288): 488.
Lincoln, A. E., et al. 2012. “The Matilda Effect in Science: Awards and Prizes in the US, 1990s and 2000s.” Soc Stud Sci 42 (2): 307–20.
Lissoni, F., F. Montobbio, and L. Zirulia. 2013. “Inventorship and Authorship as Attribution Rights: An Enquiry into the Economics of Scientific Credit.” J Econ Behav Organ 95: 49–69.
Maddox, B. 2002. Rosalind Franklin: The Dark Lady of DNA. New York: HarperCollins.
Maddox, B. 2003. “The Double Helix and the ‘Wronged Heroine.’” Nature 421 (6921): 407.
Merton, R. K. 1957. “Priorities in Scientific Discovery: A Chapter in the Sociology of Science.” Am Soc Rev 22 (6): 635–59.
Merton, R. K. 1968. “The Matthew Effect in Science: The Reward and Communication Systems of Science Are Considered.” Science 159 (3810): 56–63.
Merton, R. K. 1969. “Behavior Patterns of Scientists.” Am Scholar 38 (2): 197–225.
National Institutes of Health (NIH) Office of Extramural Research. 2009. “Update on the Requirement for Instruction in the Responsible Conduct of Research.” https://grants.nih.gov/grants/guide/notice-files/NOT-OD-10-019.html.
Offord, C. 2017. “Addressing Biomedical Science’s PhD Problem.” Scientist, 1 Jan.
Patterson, M., and R. Schekman. 2018. “Scientific Publishing: A New Twist on Peer Review.” eLife 7: e36545.
Phillips, D. P., et al. 1991. “Importance of the Lay Press in the Transmission of Medical Knowledge to the Scientific Community.” N Engl J Med 325 (16): 1180–83.
Rennie, D., V. Yank, and L. Emanuel. 1997. “When Authorship Fails: A Proposal to Make Contributors Accountable.” JAMA 278 (7): 579–85.
Roach, M., and H. Sauermann. 2010. “A Taste for Science? PhD Scientists’ Academic Orientation and Self-Selection into Research Careers in Industry.” Res Policy 39 (3): 422–34.
Sauermann, H., and M. Roach. 2014. “Not All Scientists Pay to Be Scientists: PhDs’ Preferences for Publishing in Industrial Employment.” Res Policy 43 (1): 32–47.
Selya, R. 2003. “Essay Review. Defined by DNA: The Intertwined Lives of James Watson and Rosalind Franklin.” J Hist Biol 36 (3): 591–97.
Seymore, S. B. 2006. “My Patent, Your Patent, or Our Patent-Inventorship Disputes within Academic Research Groups.” Alb L J Sci Tech 16: 125.
Shelton, R. L. 2000. “The Institutional Ombudsman: A University Case Study.” Negotiation J 16 (1): 81–98.
Shen, H.-W., and A.-L. Barabási. 2014. “Collective Credit Allocation in Science.” Proc Natl Acad Sci 111 (34): 12325–30.
Strevens, M. 2003. “The Role of the Priority Rule in Science.” J Philos 100 (2): 55–79.
Thorne, F. C. 1977. “The Citation Index: Another Case of Spurious Validity.” J Clin Psychol 33 (4): 1157–61.
Vale, R. D., and A. A. Hyman. 2016. “Priority of Discovery in the Life Sciences.” eLife 5. DOI: 10.7554/eLife.16931.
Wager, E. 2009. “Recognition, Reward and Responsibility: Why the Authorship of Scientific Papers Matters.” Maturitas 62 (2): 109–12.
Watson, J. D., and F. H. Crick. 1953. “Molecular Structure of Nucleic Acids: A Structure for Deoxyribose Nucleic Acid.” Nature 171 (4356): 737–38.
Watson, J. D. 2012. The Double Helix. London: Hachette.
Wuchty, S., B. F. Jones, and B. Uzzi. 2007. “The Increasing Dominance of Teams in Production of Knowledge.” Science 316 (5827): 1036–39.
Zuckerman, H. 1992. “The Proliferation of Prizes: Nobel Complements and Nobel Surrogates in the Reward System of Science.” Theor Med 13 (2): 217–31.

The author wishes to thank the following individuals for helpful comments on various drafts of this paper: Allan Brandt, Nicholas Christakis, Alex Csiszar, Alan Garber, Michael Gimbrone, David Glass, Ron Kahn, Marc Kirschner, Rudolf Leibel, Joseph Loscalzo, Eleftheria Maratos-Flier, Steven O’Rahilly, Scott Podolsky, Lanny Rosenwasser, and Dan Wainstock.
